hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
420d148bc469105cd3d8585bbbb8f38f1d6ec875 | 2,058 | py | Python | metaflow/plugins/env_escape/configurations/test_lib_impl/test_lib.py | RobBlumberg/metaflow | 9f737e6026eee250c1593a2cb1d1c4b19a00adf4 | [
"Apache-2.0"
] | 5,821 | 2019-12-03T17:57:52.000Z | 2022-03-31T22:55:12.000Z | metaflow/plugins/env_escape/configurations/test_lib_impl/test_lib.py | RobBlumberg/metaflow | 9f737e6026eee250c1593a2cb1d1c4b19a00adf4 | [
"Apache-2.0"
] | 605 | 2019-12-03T23:09:32.000Z | 2022-03-31T16:15:05.000Z | metaflow/plugins/env_escape/configurations/test_lib_impl/test_lib.py | RobBlumberg/metaflow | 9f737e6026eee250c1593a2cb1d1c4b19a00adf4 | [
"Apache-2.0"
] | 539 | 2019-12-03T18:25:53.000Z | 2022-03-29T18:22:33.000Z | import functools
class MyBaseException(Exception):
pass
class SomeException(MyBaseException):
pass
class TestClass1(object):
cls_object = 25
def __init__(self, value):
self._value = value
self._value2 = 123
def unsupported_method(self):
pass
def print_value(self):
return self._value
def __str__(self):
return "My str representation is %s" % str(self._value)
def __repr__(self):
return "My repr representation is %s" % str(self._value)
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = value
def to_class2(self, count, stride=1):
return TestClass2(self._value, stride, count)
@staticmethod
def somethingstatic(val):
return val + 42
@classmethod
def somethingclass(cls):
return cls.cls_object
@property
def override_value(self):
return self._value2
@override_value.setter
def override_value(self, value):
self._value2 = value
class TestClass2(object):
def __init__(self, value, stride, count):
self._mylist = [value + stride * i for i in range(count)]
def something(self, val):
return "In Test2 with %s" % val
def __iter__(self):
self._pos = 0
return self
def __next__(self):
if self._pos < len(self._mylist):
self._pos += 1
return self._mylist[self._pos - 1]
raise StopIteration
class TestClass3(object):
def __init__(self):
print("I am Class3")
def thirdfunction(self, val):
print("Got value: %s" % val)
# raise AttributeError("Some weird error")
def raiseSomething(self):
raise SomeException("Something went wrong")
def __hidden(self, name, value):
setattr(self, name, value)
def weird_indirection(self, name):
return functools.partial(self.__hidden, name)
def test_func(*args, **kwargs):
return "In test func"
test_value = 1
| 20.376238 | 65 | 0.623907 | 246 | 2,058 | 4.97561 | 0.321138 | 0.080882 | 0.045752 | 0.046569 | 0.153595 | 0.047386 | 0 | 0 | 0 | 0 | 0 | 0.014825 | 0.278912 | 2,058 | 100 | 66 | 20.58 | 0.809973 | 0.019436 | 0 | 0.138462 | 0 | 0 | 0.062996 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.338462 | false | 0.046154 | 0.015385 | 0.169231 | 0.646154 | 0.046154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
421a32da4769d80ffba1268d31b7a676642e60fc | 1,009 | py | Python | s3prl/upstream/example/hubconf.py | hhhaaahhhaa/s3prl | a469787f05c42196c4d989555082f5fd9dcbe8a6 | [
"Apache-2.0"
] | 856 | 2021-01-15T15:40:32.000Z | 2022-03-31T07:08:17.000Z | s3prl/upstream/example/hubconf.py | hhhaaahhhaa/s3prl | a469787f05c42196c4d989555082f5fd9dcbe8a6 | [
"Apache-2.0"
] | 210 | 2021-01-15T13:28:50.000Z | 2022-03-30T06:13:51.000Z | s3prl/upstream/example/hubconf.py | hhhaaahhhaa/s3prl | a469787f05c42196c4d989555082f5fd9dcbe8a6 | [
"Apache-2.0"
] | 208 | 2021-01-15T03:03:12.000Z | 2022-03-31T08:33:27.000Z | from .expert import UpstreamExpert as _UpstreamExpert
def customized_upstream(*args, **kwargs):
"""
To enable your customized pretrained model, you only need to implement
upstream/example/expert.py and leave this file as is. This file is
used to register the UpstreamExpert in upstream/example/expert.py
The following is a brief introduction of the registration mechanism.
The s3prl/hub.py will collect all the entries registered in this file
(callable variables without the underscore prefix) as a centralized
upstream factory. One can pick up this upstream from the factory via
1.
from s3prl.hub import customized_upstream
model = customized_upstream(ckpt, model_config)
2.
model = torch.hub.load(
'your_s3prl_path',
'customized_upstream',
ckpt,
model_config,
source='local',
)
    Our run_downstream.py and downstream/runner.py follow the first usage
"""
return _UpstreamExpert(*args, **kwargs)
| 32.548387 | 74 | 0.716551 | 133 | 1,009 | 5.353383 | 0.548872 | 0.101124 | 0.058989 | 0.064607 | 0.092697 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006378 | 0.222993 | 1,009 | 30 | 75 | 33.633333 | 0.901786 | 0.767096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
421cd1f840cd074e3eb92df46eaaf5c4a3768113 | 1,891 | py | Python | boa3/model/builtin/interop/oracle/oracletype.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 25 | 2020-07-22T19:37:43.000Z | 2022-03-08T03:23:55.000Z | boa3/model/builtin/interop/oracle/oracletype.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 419 | 2020-04-23T17:48:14.000Z | 2022-03-31T13:17:45.000Z | boa3/model/builtin/interop/oracle/oracletype.py | hal0x2328/neo3-boa | 6825a3533384cb01660773050719402a9703065b | [
"Apache-2.0"
] | 15 | 2020-05-21T21:54:24.000Z | 2021-11-18T06:17:24.000Z | from __future__ import annotations
from typing import Any, Dict, Optional
from boa3.model.method import Method
from boa3.model.property import Property
from boa3.model.type.classes.classarraytype import ClassArrayType
from boa3.model.variable import Variable
class OracleType(ClassArrayType):
"""
A class used to represent Oracle class
"""
def __init__(self):
super().__init__('Oracle')
self._variables: Dict[str, Variable] = {}
self._class_methods: Dict[str, Method] = {}
self._constructor: Method = None
@property
def instance_variables(self) -> Dict[str, Variable]:
return self._variables.copy()
@property
def class_variables(self) -> Dict[str, Variable]:
return {}
@property
def properties(self) -> Dict[str, Property]:
return {}
@property
def static_methods(self) -> Dict[str, Method]:
return {}
@property
def class_methods(self) -> Dict[str, Method]:
# avoid recursive import
from boa3.model.builtin.interop.oracle.oraclegetpricemethod import OracleGetPriceMethod
from boa3.model.builtin.interop.oracle.oraclerequestmethod import OracleRequestMethod
if len(self._class_methods) == 0:
self._class_methods = {
'get_price': OracleGetPriceMethod(),
'request': OracleRequestMethod()
}
return self._class_methods
@property
def instance_methods(self) -> Dict[str, Method]:
return {}
def constructor_method(self) -> Optional[Method]:
return self._constructor
@classmethod
def build(cls, value: Any = None) -> OracleType:
if value is None or cls._is_type_of(value):
return _Oracle
@classmethod
def _is_type_of(cls, value: Any):
return isinstance(value, OracleType)
_Oracle = OracleType()
| 27.014286 | 95 | 0.657324 | 204 | 1,891 | 5.901961 | 0.279412 | 0.046512 | 0.064784 | 0.044851 | 0.181063 | 0.16113 | 0 | 0 | 0 | 0 | 0 | 0.004912 | 0.24643 | 1,891 | 69 | 96 | 27.405797 | 0.84 | 0.032787 | 0 | 0.255319 | 0 | 0 | 0.012135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212766 | false | 0 | 0.170213 | 0.148936 | 0.595745 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
42228f1e28d8899ed8da922c4eb2bd3b92ca4e69 | 191 | py | Python | photo-hub/api/pagination.py | RodionChachura/photo-hub | 20ec008076a34cb09b289fda0557e2efc7e06232 | [
"MIT"
] | null | null | null | photo-hub/api/pagination.py | RodionChachura/photo-hub | 20ec008076a34cb09b289fda0557e2efc7e06232 | [
"MIT"
] | null | null | null | photo-hub/api/pagination.py | RodionChachura/photo-hub | 20ec008076a34cb09b289fda0557e2efc7e06232 | [
"MIT"
] | null | null | null | from rest_framework.pagination import PageNumberPagination
class StandardPagination(PageNumberPagination):
page_size = 30
page_size_query_param = 'page_size'
max_page_size = 1000 | 31.833333 | 58 | 0.811518 | 22 | 191 | 6.681818 | 0.681818 | 0.217687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 0.141361 | 191 | 6 | 59 | 31.833333 | 0.859756 | 0 | 0 | 0 | 0 | 0 | 0.046875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4269db32f55f118da9ba1a4ffe9262967fe30e06 | 283 | py | Python | 1501-1600/1560-Most Visited Sector in a Circular Track/1560-Most Visited Sector in a Circular Track.py | jiadaizhao/LeetCode | 4ddea0a532fe7c5d053ffbd6870174ec99fc2d60 | [
"MIT"
] | 49 | 2018-05-05T02:53:10.000Z | 2022-03-30T12:08:09.000Z | 1501-1600/1560-Most Visited Sector in a Circular Track/1560-Most Visited Sector in a Circular Track.py | ptx-c/LeetCode | 4ddea0a532fe7c5d053ffbd6870174ec99fc2d60 | [
"MIT"
] | 11 | 2017-12-15T22:31:44.000Z | 2020-10-02T12:42:49.000Z | 1501-1600/1560-Most Visited Sector in a Circular Track/1560-Most Visited Sector in a Circular Track.py | ptx-c/LeetCode | 4ddea0a532fe7c5d053ffbd6870174ec99fc2d60 | [
"MIT"
] | 28 | 2017-12-05T10:56:51.000Z | 2022-01-26T18:18:27.000Z | from typing import List
class Solution:
def mostVisited(self, n: int, rounds: List[int]) -> List[int]:
start, end = rounds[0], rounds[-1]
if end >= start:
return list(range(start, end + 1))
else:
return list(range(1, end + 1)) + list(range(start, n + 1))
| 35.375 | 70 | 0.533569 | 39 | 283 | 3.871795 | 0.435897 | 0.178808 | 0.198676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0.30742 | 283 | 7 | 71 | 40.428571 | 0.739796 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
4283b88a83b93254b8e97d4642f0ca0d5d69279d | 68 | py | Python | examples/02_pybind/01_basic/example.py | BlockResearchGroup/WS_interoperability | 604ab29c242b30b2ee9125a589afe69010ba1844 | [
"MIT"
] | 1 | 2019-07-26T22:25:25.000Z | 2019-07-26T22:25:25.000Z | examples/02_pybind/01_basic/example.py | BlockResearchGroup/WS_interoperability | 604ab29c242b30b2ee9125a589afe69010ba1844 | [
"MIT"
] | 5 | 2019-04-14T21:07:03.000Z | 2019-05-27T21:46:37.000Z | examples/02_pybind/01_basic/example.py | BlockResearchGroup/WS_interoperability | 604ab29c242b30b2ee9125a589afe69010ba1844 | [
"MIT"
] | null | null | null | # example.py
import basic
result = basic.add(1, 5)
print(result)
| 8.5 | 24 | 0.691176 | 11 | 68 | 4.272727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.176471 | 68 | 7 | 25 | 9.714286 | 0.803571 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
429b9b03d73a5f7f9bbccc750f09ea936a25f8a0 | 78 | py | Python | __init__.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | null | null | null | __init__.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | 3 | 2015-12-02T19:37:45.000Z | 2016-01-20T03:21:48.000Z | __init__.py | bbockelm/glideinWMS | a2b39e3d4ff6c4527efad54b1eefe728a4ec9d18 | [
"BSD-3-Clause"
] | 1 | 2015-12-01T23:02:41.000Z | 2015-12-01T23:02:41.000Z | __all__=["factory","frontend","lib","tools","creation","install","unittests"]
| 39 | 77 | 0.692308 | 8 | 78 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012821 | 78 | 1 | 78 | 78 | 0.649351 | 0 | 0 | 0 | 0 | 0 | 0.602564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
42a96ad3b83164695c47573ef1f876f36eb4d891 | 1,148 | py | Python | pybloxy/classes/http.py | R0bl0x10501050/roblox.py | cbbb25878627c2d837caaeb7edf37d0aeda615ae | [
"MIT"
] | null | null | null | pybloxy/classes/http.py | R0bl0x10501050/roblox.py | cbbb25878627c2d837caaeb7edf37d0aeda615ae | [
"MIT"
] | null | null | null | pybloxy/classes/http.py | R0bl0x10501050/roblox.py | cbbb25878627c2d837caaeb7edf37d0aeda615ae | [
"MIT"
] | null | null | null | import logging
import requests
class Http:
def sendRequest(url):
payload = requests.get(str(url))
statusCode = payload.status_code
content = payload.content
if statusCode != 200:
return logging.error(f"[Pybloxy - GET] Something went wrong! Error Code: {statusCode}")
return content
def postRequest(url, payload):
payload = requests.post(str(url), data = payload)
statusCode = payload.status_code
content = payload.content
if statusCode != 200:
return logging.error(f"[Pybloxy - POST] Something went wrong! Error Code: {statusCode}")
return content
def patchRequest(url, payload):
payload = requests.patch(str(url), data = payload)
statusCode = payload.status_code
content = payload.content
if statusCode != 200:
return logging.error(f"[Pybloxy - PATCH] Something went wrong! Error Code: {statusCode}")
return content
	def deleteRequest(url):
		payload = requests.delete(str(url))
statusCode = payload.status_code
content = payload.content
if statusCode != 200:
return logging.error(f"[Pybloxy - DELETE] Something went wrong! Error Code: {statusCode}")
return content | 26.697674 | 93 | 0.722997 | 141 | 1,148 | 5.858156 | 0.219858 | 0.048426 | 0.11138 | 0.130751 | 0.739709 | 0.739709 | 0.739709 | 0.739709 | 0.679177 | 0.486683 | 0 | 0.012632 | 0.172474 | 1,148 | 43 | 94 | 26.697674 | 0.856842 | 0 | 0 | 0.516129 | 0 | 0 | 0.221062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.064516 | 0 | 0.483871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
42b002236c965251bc510639be4dce4dd1300339 | 2,946 | py | Python | ZZZ_OtherDemo/00-dyld-832.7.3/testing/kernel-cache-tests/kext-missing-weak-bind/test.py | 1079278593/TreasureChest | 8b1ebe04ed7c2ed399c4ecf3b75b3fee0a1aced8 | [
"MIT"
] | null | null | null | ZZZ_OtherDemo/00-dyld-832.7.3/testing/kernel-cache-tests/kext-missing-weak-bind/test.py | 1079278593/TreasureChest | 8b1ebe04ed7c2ed399c4ecf3b75b3fee0a1aced8 | [
"MIT"
] | null | null | null | ZZZ_OtherDemo/00-dyld-832.7.3/testing/kernel-cache-tests/kext-missing-weak-bind/test.py | 1079278593/TreasureChest | 8b1ebe04ed7c2ed399c4ecf3b75b3fee0a1aced8 | [
"MIT"
] | null | null | null | #!/usr/bin/python2.7
import os
import KernelCollection
# Check that weak binds can be missing, so long as we check for the magic symbol
def check(kernel_cache):
kernel_cache.buildKernelCollection("arm64", "/kext-missing-weak-bind/main.kc", "/kext-missing-weak-bind/main.kernel", "/kext-missing-weak-bind/extensions", ["com.apple.foo", "com.apple.bar"], [])
kernel_cache.analyze("/kext-missing-weak-bind/main.kc", ["-layout", "-arch", "arm64"])
assert kernel_cache.dictionary()["cache-segments"][3]["name"] == "__DATA_CONST"
assert kernel_cache.dictionary()["cache-segments"][3]["vmAddr"] == "0x18000"
assert len(kernel_cache.dictionary()["dylibs"]) == 3
assert kernel_cache.dictionary()["dylibs"][0]["name"] == "com.apple.kernel"
assert kernel_cache.dictionary()["dylibs"][1]["name"] == "com.apple.bar"
assert kernel_cache.dictionary()["dylibs"][2]["name"] == "com.apple.foo"
# Symbols
kernel_cache.analyze("/kext-missing-weak-bind/main.kc", ["-symbols", "-arch", "arm64"])
# kernel
assert kernel_cache.dictionary()["dylibs"][0]["name"] == "com.apple.kernel"
assert kernel_cache.dictionary()["dylibs"][0]["global-symbols"][2]["name"] == "_gOSKextUnresolved"
assert kernel_cache.dictionary()["dylibs"][0]["global-symbols"][2]["vmAddr"] == "0x20000"
# Check the fixups
kernel_cache.analyze("/kext-missing-weak-bind/main.kc", ["-fixups", "-arch", "arm64"])
assert len(kernel_cache.dictionary()["fixups"]) == 4
assert kernel_cache.dictionary()["fixups"]["0x18000"] == "kc(0) + 0x20000"
assert kernel_cache.dictionary()["fixups"]["0x18008"] == "kc(0) + 0x20000"
assert kernel_cache.dictionary()["fixups"]["0x18010"] == "kc(0) + 0x20000"
assert kernel_cache.dictionary()["fixups"]["0x18018"] == "kc(0) + 0x20000"
assert len(kernel_cache.dictionary()["dylibs"]) == 3
assert kernel_cache.dictionary()["dylibs"][0]["name"] == "com.apple.kernel"
assert kernel_cache.dictionary()["dylibs"][0]["fixups"] == "none"
assert kernel_cache.dictionary()["dylibs"][1]["name"] == "com.apple.bar"
assert kernel_cache.dictionary()["dylibs"][1]["fixups"] == "none"
assert kernel_cache.dictionary()["dylibs"][2]["name"] == "com.apple.foo"
assert kernel_cache.dictionary()["dylibs"][2]["fixups"] == "none"
# [~]> xcrun -sdk iphoneos.internal cc -arch arm64 -Wl,-static -mkernel -nostdlib -Wl,-add_split_seg_info -Wl,-rename_section,__TEXT,__text,__TEXT_EXEC,__text -Wl,-e,__start -Wl,-pagezero_size,0x0 -Wl,-pie -Wl,-sectcreate,__LINKINFO,__symbolsets,SymbolSets.plist -Wl,-segprot,__LINKINFO,r--,r-- main.c -o main.kernel
# [~]> xcrun -sdk iphoneos.internal cc -arch arm64 -Wl,-kext -mkernel -nostdlib -Wl,-add_split_seg_info foo.c -o extensions/foo.kext/foo
# [~]> xcrun -sdk iphoneos.internal cc -arch arm64 -Wl,-kext -mkernel -nostdlib -Wl,-add_split_seg_info bar.c -o extensions/bar.kext/bar -Wl,-fixup_chains
# [~]> rm -r extensions/*.kext/*.ld
| 62.680851 | 316 | 0.681942 | 391 | 2,946 | 4.987212 | 0.250639 | 0.146667 | 0.226154 | 0.249231 | 0.674872 | 0.630769 | 0.6 | 0.528205 | 0.443077 | 0.345128 | 0 | 0.038476 | 0.108961 | 2,946 | 46 | 317 | 64.043478 | 0.704381 | 0.260353 | 0 | 0.321429 | 0 | 0 | 0.348548 | 0.088981 | 0 | 0 | 0.032273 | 0 | 0.75 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.107143 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
35f622ff3fa5187c3265b7d1252636eaf5af175d | 5,708 | py | Python | tests/test_camera.py | Gokender/kameramera | 7ebd9a196809c1e7ab117bb11b90bcea8d1eb8e7 | [
"MIT"
] | null | null | null | tests/test_camera.py | Gokender/kameramera | 7ebd9a196809c1e7ab117bb11b90bcea8d1eb8e7 | [
"MIT"
] | null | null | null | tests/test_camera.py | Gokender/kameramera | 7ebd9a196809c1e7ab117bb11b90bcea8d1eb8e7 | [
"MIT"
] | null | null | null | import unittest
from kameramera import camera
class Camera(unittest.TestCase):
def setUp(self):
self.camera = camera.Camera(camera_id='canon_ae1')
def test_general_manufacturer(self):
self.assertEqual(self.camera.general.manufacturer, 'Canon')
def test_general_name(self):
self.assertEqual(self.camera.general.name, 'Canon AE-1')
def test_general_type(self):
self.assertEqual(self.camera.general.type, 'SLR')
def test_general_format(self):
self.assertEqual(self.camera.general.format, '24x35')
def test_general_made_in(self):
self.assertEqual(self.camera.general.made_in, 'Japan')
def test_general_date(self):
self.assertEqual(self.camera.general.date, '1976-1984')
def test_general_body_construction(self):
self.assertEqual(self.camera.general.body_construction, 'metal')
def test_general_mount_threads(self):
self.assertEqual(self.camera.general.mount_threads, '1/4"')
def test_general_dimension(self):
self.assertEqual(self.camera.general.dimension, '141x87x47.5 mm')
def test_general_weight(self):
self.assertEqual(self.camera.general.weight, '620g')
def test_optics_lenses(self):
self.assertEqual(self.camera.optics.lenses, 'interchangeable')
def test_optics_lenses_mount(self):
self.assertEqual(self.camera.optics.lenses_mount, 'Canon FD')
def test_sighting_type(self):
self.assertEqual(self.camera.sighting.type, 'fixed eye-level pentaprism')
def test_sighting_display(self):
self.assertEqual(self.camera.sighting.display, False)
def test_sighting_viewfinder_rangefinder(self):
self.assertEqual(self.camera.sighting.viewfinder.rangefinder,
['split_image', 'microprism'])
def test_sighting_viewfinder_aperture(self):
self.assertEqual(self.camera.sighting.viewfinder.aperture, True)
def test_sighting_viewfinder_exposure_indicator(self):
self.assertEqual(self.camera.sighting.viewfinder.exposure_indicator, True)
def test_sighting_viewfinder_flash_indicator(self):
self.assertEqual(self.camera.sighting.viewfinder.flash_indicator, True)
def test_focus_manual(self):
self.assertEqual(self.camera.focus.manual, True)
def test_focus_autofocus(self):
self.assertEqual(self.camera.focus.autofocus, False)
def test_focus_stabilization(self):
self.assertEqual(self.camera.focus.stabilization, False)
def test_focus_depth_of_field(self):
self.assertEqual(self.camera.focus.depth_of_field, True)
def test_shutter_type(self):
self.assertEqual(self.camera.shutter.type, None)
def test_shutter_shutter_speeds(self):
self.assertEqual(self.camera.shutter.shutter_speeds, [
'2',
'1',
'1/2',
'1/4',
'1/8',
'1/15',
'1/30',
'1/60',
'1/125',
'1/250',
'1/500',
'1/1000'
])
def test_shutter_pose(self):
self.assertEqual(self.camera.shutter.pose, 'B')
def test_shutter_self_timer(self):
self.assertEqual(self.camera.shutter.self_timer, 10)
def test_exposure_mode(self):
self.assertEqual(self.camera.exposure.mode, ['M','S'])
def test_exposure_correction(self):
self.assertEqual(self.camera.exposure.correction, 1.5)
def test_exposure_measure_type(self):
self.assertEqual(self.camera.exposure.measure.type, 'TTL')
def test_exposure_measure_light_sensor(self):
self.assertEqual(self.camera.exposure.measure.light_sensor,
'silicon photon cell')
def test_exposure_measure_metering_mode(self):
self.assertEqual(self.camera.exposure.measure.metering_mode,
'center-weighted average metering')
def test_exposure_measure_memory(self):
self.assertEqual(self.camera.exposure.measure.memory, True)
def test_film_format(self):
self.assertEqual(self.camera.film.format, 135)
def test_film_advance(self):
self.assertEqual(self.camera.film.advance, 'manual')
def test_film_frame_counter(self):
self.assertEqual(self.camera.film.frame_counter, True)
def test_film_film_speed(self):
self.assertEqual(self.camera.film.film_speed, [
25,
32,
40,
50,
64,
80,
100,
125,
160,
200,
250,
320,
400,
500,
640,
800,
1000,
1250,
1600,
2000,
2500,
3200
])
def test_flash_built_in(self):
self.assertEqual(self.camera.flash.built_in, False)
def test_flash_hot_shoe(self):
self.assertEqual(self.camera.flash.hot_shoe, True)
def test_flash_synchronization(self):
self.assertEqual(self.camera.flash.synchronization, '1/60')
def test_power_required(self):
self.assertEqual(self.camera.power.required, True)
def test_power_source_number(self):
self.assertEqual(self.camera.power.source[0].number, 1)
def test_power_source_voltage(self):
self.assertEqual(self.camera.power.source[0].voltage, 6)
def test_power_source_type(self):
self.assertEqual(self.camera.power.source[0].type, [
'alkaline-manganese',
'silver oxyde',
'lithium'
]) | 31.711111 | 82 | 0.637176 | 660 | 5,708 | 5.310606 | 0.215152 | 0.100428 | 0.233096 | 0.282168 | 0.49786 | 0.481312 | 0.180314 | 0.067047 | 0 | 0 | 0 | 0.033483 | 0.257008 | 5,708 | 180 | 83 | 31.711111 | 0.792973 | 0 | 0 | 0.022388 | 0 | 0 | 0.050972 | 0 | 0 | 0 | 0 | 0 | 0.320896 | 1 | 0.328358 | false | 0 | 0.014925 | 0 | 0.350746 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c4009ade7b5eb056201eed0338579ec28e08eb56 | 226 | py | Python | countdownhype/urls.py | chri4354/BeeMe_platform | b73843d9146c5ba54a63a8839980ee7c8024e80d | [
"CC-BY-4.0"
] | null | null | null | countdownhype/urls.py | chri4354/BeeMe_platform | b73843d9146c5ba54a63a8839980ee7c8024e80d | [
"CC-BY-4.0"
] | 8 | 2020-06-06T01:55:55.000Z | 2022-03-12T00:31:52.000Z | countdownhype/urls.py | chri4354/BeeMe_platform | b73843d9146c5ba54a63a8839980ee7c8024e80d | [
"CC-BY-4.0"
] | null | null | null | from django.urls import path, re_path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('countdown/', views.countdown, name='countdown'),
#re_path(r'.+', views.redir, name='redir'),
]
| 22.6 | 58 | 0.650442 | 29 | 226 | 5 | 0.448276 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159292 | 226 | 9 | 59 | 25.111111 | 0.763158 | 0.185841 | 0 | 0 | 0 | 0 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
c42e658ca9b791acfc8cc494fe87a5ee5b2f994f | 1,006 | py | Python | jubakit/test/__init__.py | vishalbelsare/jubakit | f6252ba627ce4e2e42eb9aafaaf05c882bc1c678 | [
"MIT"
] | 12 | 2016-04-11T04:49:08.000Z | 2019-02-08T01:43:46.000Z | jubakit/test/__init__.py | vishalbelsare/jubakit | f6252ba627ce4e2e42eb9aafaaf05c882bc1c678 | [
"MIT"
] | 138 | 2016-04-11T05:57:48.000Z | 2020-09-26T03:09:31.000Z | jubakit/test/__init__.py | vishalbelsare/jubakit | f6252ba627ce4e2e42eb9aafaaf05c882bc1c678 | [
"MIT"
] | 10 | 2016-04-11T03:18:45.000Z | 2018-04-14T10:11:15.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
__all__ = ['requireSklearn']
from jubakit.compat import PYTHON3
try:
import embedded_jubatus
embedded_available = True
except ImportError:
embedded_available = False
try:
import numpy
import scipy
import sklearn
sklearn_available = True
except ImportError:
sklearn_available = False
try:
from unittest import skipUnless
def requireSklearn(target):
return skipUnless(sklearn_available, 'requires scikit-learn')(target)
def requirePython3(target):
return skipUnless(PYTHON3, 'requires Python 3.x')(target)
def requireEmbedded(target):
return skipUnless(embedded_available, 'requires embedded_jubatus')(target)
except ImportError:
def requireSklearn(target):
return target if sklearn_available else None
def requirePython3(target):
return target if PYTHON3 else None
def requireEmbedded(target):
return target if embedded_available else None
| 26.473684 | 82 | 0.777336 | 116 | 1,006 | 6.560345 | 0.37931 | 0.094612 | 0.086728 | 0.078844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008226 | 0.154076 | 1,006 | 37 | 83 | 27.189189 | 0.886016 | 0.020875 | 0 | 0.4 | 0 | 0 | 0.080366 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.333333 | 0.2 | 0.733333 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 3 |
c4311123dd5258af551865a612948896d2a1bbc9 | 2,132 | py | Python | registration/email.py | openstack-kr/openinfradays-2018 | 9eb0e284ab95e177dc4acca17d63ccbdaff67fb1 | [
"Apache-2.0"
] | null | null | null | registration/email.py | openstack-kr/openinfradays-2018 | 9eb0e284ab95e177dc4acca17d63ccbdaff67fb1 | [
"Apache-2.0"
] | 1 | 2018-06-17T02:21:41.000Z | 2018-06-17T02:21:41.000Z | registration/email.py | openstack-kr/openinfradays-2018 | 9eb0e284ab95e177dc4acca17d63ccbdaff67fb1 | [
"Apache-2.0"
] | 1 | 2018-05-31T11:39:02.000Z | 2018-05-31T11:39:02.000Z | from django.core.mail import EmailMessage
from django.conf import settings
def send_email(name, date, email):
    """Send the HTML invitation-ticket confirmation email (body text is in Korean:
    a greeting, "invitation ticket registration is complete", and the attendance date)."""
    txt = """
<html>
<body>
<table cellpadding='0' cellspacing='0' width='100%' border='0'>
<tbody>
<tr>
<td style='word-wrap:break-word;font-size:0px;padding:0px;padding-bottom:10px' align='left'>
<div style='color:#000000;font-family:Spoqa Han Sans,sans-serif;font-size:20px;line-height:22px;letter-spacing:-0.8px;text-align:left'>
안녕하세요 <span style='color:#3832D8'>{0}</span> 님,
</div>
</td>
</tr>
<tr>
<td style='word-wrap:break-word;font-size:0px;padding:0px;padding-bottom:10px' align='left'>
<div style='color:#000000;font-family:Spoqa Han Sans,sans-serif;font-size:30px;line-height:1.3;letter-spacing:-1.1px; text-align:left'>
OpenInfra Days Korea 2018
</div>
</td>
</tr>
<tr>
<td style='word-wrap:break-word;font-size:0px;padding:0px;padding-bottom:30px' align='left'>
<div style='color:#000000;font-family:Spoqa Han Sans,sans-serif;font-size:20px;line-height:22px;letter-spacing:-0.8px;text-align:left'>
초청 티켓 등록이 완료되었습니다.
</div>
</td>
</tr>
<tr>
<td style='word-wrap:break-word;font-size:0px;padding:0px;padding-bottom:30px' align='left'>
<div style='color:#000000;font-family:Spoqa Han Sans,sans-serif;font-size:20px;line-height:22px;letter-spacing:-0.8px;text-align:left'>
참가 일자 : {1}
</div>
</td>
</tr>
<tr>
<td style='word-wrap:break-word;font-size:0px;padding:0px' align='left'>
<div style='color:#000000;font-family:Spoqa Han Sans,sans-serif;font-size:20px;line-height:22px;letter-spacing:-0.8px;text-align:left'>
<a href="http://invite.openinfradays.kr">티켓 확인</a>
</div>
</td>
</tr>
</tbody>
</table>
</body>
</html>
""".format(name, date)
    msg = EmailMessage(settings.EMAIL_TITLE, txt, to=(email,))
    msg.content_subtype = "html"
    return msg.send()
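The template above relies only on `str.format` positional placeholders. A minimal sketch of the same fill-in step (the markup and values here are illustrative, not the original Korean body):

```python
# Positional placeholders {0}/{1} are filled by str.format,
# exactly as the email body above is built.
template = "<p>Hello <span>{0}</span>,</p><p>Attendance date: {1}</p>"
body = template.format("Alice", "2018-06-28")
print(body)
```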
| 38.071429 | 147 | 0.60272 | 298 | 2,132 | 4.302013 | 0.285235 | 0.062403 | 0.035101 | 0.050702 | 0.669267 | 0.669267 | 0.669267 | 0.669267 | 0.669267 | 0.669267 | 0 | 0.05716 | 0.22045 | 2,132 | 55 | 148 | 38.763636 | 0.7142 | 0 | 0 | 0.54902 | 0 | 0.196078 | 0.874237 | 0.464101 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.039216 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c44e49588a5ae8bdc21c7e3ab11388f043afd9f0 | 8,816 | py | Python | codegen/codegen/fblas_routine.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 68 | 2019-02-07T21:30:21.000Z | 2022-02-16T20:09:27.000Z | codegen/codegen/fblas_routine.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 2 | 2019-03-15T17:49:03.000Z | 2019-07-24T14:05:35.000Z | codegen/codegen/fblas_routine.py | spcl/fblas | 96425fbdbaeab6f43997d839836b8224a04f3b53 | [
"BSD-3-Clause"
] | 25 | 2019-03-15T03:00:15.000Z | 2021-08-04T10:21:43.000Z | """
FBLAS Routine class: used to represent a routine definition, specified by the user via a JSON file.
It is used by both the Host and Module codegens (selected by the _codegen member); depending on
the codegen in use, some class members may be invalid.
"""
from codegen import fblas_types
from codegen import generator_definitions
class FBLASRoutine:
# name of the routine according to blas (without indication of the precision)
_blas_name = ""
# user name for the routine
_user_name = ""
# spatial parallelism (vectorization width)
_width = generator_definitions.DEFAULT_WIDTH
# data type used in routine
_type: fblas_types.RoutineType
_type_str: str
# if the routine has to use shift registers (e.g. double precision) or not
# and in case how big they should be
_uses_shift_registers = False
_size_shift_registers = 0
# The type of codegen:Host/Modules
_codegen = None
    # incx/incy
_incx = 1
_incy = 1
# Level 2/3: tile sizes
_tile_n_size = generator_definitions.DEFAULT_TILE_SIZE
_tile_m_size = generator_definitions.DEFAULT_TILE_SIZE
# Matrix characteristics
_order = None
_diag = None
_transposeA = None
_transposeB = None
_side = None
_uplo = None
# input/output channels (useful for Module Codegen)
# these are instance member dictionaries "required_channel_name" -> "user_name"
_input_channels = None
_output_channels = None
# Tiles and element order (for level2/3 that works with matrices)
# The order is RowMajor if tiles/element are row streamed
# otherwise it is ColumnMajor
_tiles_A_order: fblas_types.FblasOrder = fblas_types.FblasOrder.FblasRowMajor
_elements_A_order: fblas_types.FblasOrder = fblas_types.FblasOrder.FblasRowMajor
    # Indicates whether or not this routine has a 2D computational tile (e.g. GEMM)
_has_2D_computational_tile = False
# If yes, there are the two vectorization width
_width_x = 0
_width_y = 0
_tile_size = 0
_systolic = False
_vect_size = 0
def __init__(self, blas_name: str, user_name: str, type: fblas_types.RoutineType, platform: fblas_types.Platform, codegen: fblas_types.FblasCodegen):
self._blas_name = blas_name
self._user_name = user_name
self._type = type
self._type_str = fblas_types.ROUTINE_TYPE_TO_TYPE_STR[type]
self._platform = platform
self._codegen = codegen
self._width = generator_definitions.DEFAULT_WIDTH
# Declare all the instance variables
self._input_channels = {}
self._output_channels = {}
self._incx = 1
self._incy = 1
self._tile_n_size = generator_definitions.DEFAULT_TILE_SIZE
self._tile_m_size = generator_definitions.DEFAULT_TILE_SIZE
self._order = fblas_types.FblasOrder.FblasOrderUndef
self._diag = fblas_types.FblasDiag.FblasDiagUndef
self._transposeA = fblas_types.FblasTranspose.FblasTransUndef
self._transposeB = fblas_types.FblasTranspose.FblasTransUndef
self._side = fblas_types.FblasSide.FblasSideUndef
self._uplo = fblas_types.FblasUpLo.FblasUpLoUndef
if type == fblas_types.RoutineType.Double:
self._uses_shift_registers = True
self._size_shift_registers = fblas_types.SHIFT_REGISTER_SIZES[(type, platform)]
else:
self._uses_shift_registers = False
self._has_2D_computational_tile = False
        self._width_x = generator_definitions.DEFAULT_2D_CTILE_WIDTH
        self._width_y = generator_definitions.DEFAULT_2D_CTILE_WIDTH
self._tile_size = generator_definitions.DEFAULT_TILE_SIZE
self._systolic = False
self._vect_size = 4
def __str__(self):
return """Routine {} implements {} with type {}
Width: {} Incx: {} Incy: {}""".format(self._user_name, self._blas_name, self._type, self._width, self._incx, self._incy)
    # Getters / setters
@property
def blas_name(self):
return self._blas_name
@property
def user_name(self):
return self._user_name
@property
def type(self):
return self._type
@property
def type_str(self):
return self._type_str
@property
def uses_shift_registers(self):
return self._uses_shift_registers
@uses_shift_registers.setter
def uses_shift_registers(self, value: bool):
        # if the routine uses shift registers, set their size
self._uses_shift_registers = value
if value:
self._size_shift_registers = fblas_types.SHIFT_REGISTER_SIZES[(self.type, self._platform)]
@property
def size_shift_registers(self):
return self._size_shift_registers
@property
def width(self):
return self._width
@width.setter
def width(self, width: int):
self._width = width
@property
def incx(self):
return self._incx
@incx.setter
def incx(self, incx: int):
self._incx = incx
@property
def incy(self):
return self._incy
@incy.setter
def incy(self, incy: int):
self._incy = incy
@property
def tile_n_size(self):
return self._tile_n_size
@tile_n_size.setter
def tile_n_size(self, tile_size: int):
self._tile_n_size = tile_size
@property
def tile_m_size(self):
return self._tile_m_size
@tile_m_size.setter
def tile_m_size(self, tile_size: int):
self._tile_m_size = tile_size
@property
def tile_size(self):
return self._tile_size
@tile_size.setter
def tile_size(self, tile_size: int):
self._tile_size = tile_size
@property
def order(self):
return self._order
@order.setter
def order(self, order: fblas_types.FblasOrder):
self._order = order
@property
def uplo(self):
return self._uplo
@uplo.setter
def uplo(self, uplo: fblas_types.FblasUpLo):
self._uplo = uplo
@property
def transposedA(self):
return self._transposeA
@transposedA.setter
def transposedA(self, trans: fblas_types.FblasTranspose):
self._transposeA = trans
@property
def transposedB(self):
return self._transposeB
@transposedB.setter
def transposedB(self, trans: fblas_types.FblasTranspose):
self._transposeB = trans
@property
def input_channels(self):
return self._input_channels
@property
def output_channels(self):
return self._output_channels
@property
def tiles_A_order(self):
return self._tiles_A_order
@tiles_A_order.setter
def tiles_A_order(self, order: fblas_types.FblasOrder):
self._tiles_A_order = order
@property
def elements_A_order(self):
return self._elements_A_order
@elements_A_order.setter
def elements_A_order(self, order : fblas_types.FblasOrder):
self._elements_A_order = order
@property
def has_2D_computational_tile(self):
return self._has_2D_computational_tile
@has_2D_computational_tile.setter
def has_2D_computational_tile(self, value: bool):
self._has_2D_computational_tile = value
@property
def width_x(self):
return self._width_x
@width_x.setter
def width_x(self, width: int):
self._width_x = width
@property
def width_y(self):
return self._width_y
@width_y.setter
def width_y(self, width: int):
self._width_y = width
@property
def systolic(self):
return self._systolic
@systolic.setter
def systolic(self, value: bool):
self._systolic = value
@property
def vect_size(self):
return self._vect_size
@vect_size.setter
def vect_size(self, value: int):
self._vect_size = value
def are_tiles_A_rowstreamed(self):
"""
:return: True if the tiles of A are rowstreamed
"""
return self._tiles_A_order == fblas_types.FblasOrder.FblasRowMajor
def are_elements_A_rowstreamed(self):
"""
:return: True if the elements of A are rowstreamed
"""
return self._elements_A_order == fblas_types.FblasOrder.FblasRowMajor
def add_input_channel(self, routine_channel_name, user_name):
'''
Add the channel to the dictionary of input channels
If already present, it will be overwritten
'''
self._input_channels[routine_channel_name] = user_name
def add_output_channel(self, routine_channel_name, user_name):
'''
        Add the channel to the dictionary of output channels
If already present, it will be overwritten
'''
self._output_channels[routine_channel_name] = user_name
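The class is dominated by one repeated idiom: a private attribute exposed through `@property` with a typed setter. A stripped-down sketch of that idiom (the attribute name and default are illustrative, not taken from the class):

```python
class Routine:
    def __init__(self):
        self._width = 16  # illustrative default

    @property
    def width(self):
        # read access goes through the property
        return self._width

    @width.setter
    def width(self, width: int):
        # write access goes through the typed setter
        self._width = width


r = Routine()
assert r.width == 16
r.width = 32
print(r.width)  # → 32
```

Keeping every field behind a property lets the class later add validation (as `uses_shift_registers` does) without changing its public interface.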
| 27.810726 | 153 | 0.678993 | 1,119 | 8,816 | 5.014298 | 0.15639 | 0.049902 | 0.062377 | 0.035644 | 0.401889 | 0.260916 | 0.20139 | 0.145072 | 0.099804 | 0.042417 | 0 | 0.003627 | 0.249433 | 8,816 | 316 | 154 | 27.898734 | 0.84434 | 0.156874 | 0 | 0.122549 | 0 | 0 | 0.010022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.240196 | false | 0 | 0.009804 | 0.127451 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
c45c12b0519ee7503fa9f8fb44c7b896a2082873 | 210 | py | Python | tools/opt.py | hmtrii/tirg | e404020795bb46fb01b6bd82a2618f9370174012 | [
"Apache-2.0"
] | null | null | null | tools/opt.py | hmtrii/tirg | e404020795bb46fb01b6bd82a2618f9370174012 | [
"Apache-2.0"
] | null | null | null | tools/opt.py | hmtrii/tirg | e404020795bb46fb01b6bd82a2618f9370174012 | [
"Apache-2.0"
] | null | null | null | class Opt:
def __init__(self):
self.dataset = "fashion200k"
self.dataset_path = "./dataset/Fashion200k"
self.batch_size = 32
self.embed_dim = 512
self.hashing = False
self.retrieve_by_random = True | 26.25 | 45 | 0.728571 | 29 | 210 | 4.965517 | 0.689655 | 0.152778 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.161905 | 210 | 8 | 46 | 26.25 | 0.755682 | 0 | 0 | 0 | 0 | 0 | 0.151659 | 0.099526 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
c489f0bb6aee13c77e0b4caf8c6ecbaa282336f5 | 539 | py | Python | services/neural/traindatabase.py | vitorecomp/hackaton-deep-learn | 962eac133ac92d56d8a55136773c2afe4da2e0b5 | [
"MIT"
] | null | null | null | services/neural/traindatabase.py | vitorecomp/hackaton-deep-learn | 962eac133ac92d56d8a55136773c2afe4da2e0b5 | [
"MIT"
] | null | null | null | services/neural/traindatabase.py | vitorecomp/hackaton-deep-learn | 962eac133ac92d56d8a55136773c2afe4da2e0b5 | [
"MIT"
] | null | null | null | from os import walk
import h5py
import numpy as np
from config.Database import Base
from config.Database import engine
from config.Database import Session
from models.Music import Music
from kmeans.kmeans import Kmeans
mypath = './dataset/datatr/'
def main():
files = []
# 2 - generate database schema
Base.metadata.create_all(engine)
# 3 - create a new session
session = Session()
musics = session.query(Music).all()
musics, distances = Kmeans.split(musics)
session.commit()
return
if __name__ == "__main__":
main()
| 15.852941 | 41 | 0.736549 | 74 | 539 | 5.243243 | 0.527027 | 0.07732 | 0.139175 | 0.185567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006696 | 0.168831 | 539 | 33 | 42 | 16.333333 | 0.859375 | 0.09833 | 0 | 0 | 0 | 0 | 0.05176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.421053 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
67161d52650aa2e5bc2f66de7b2914c066936052 | 362 | py | Python | after/config.py | mauvilsa/2021-config | 870fd832bda269a1be7bfba32dd327df9987e74a | [
"MIT"
] | 5 | 2021-12-25T15:16:16.000Z | 2022-03-19T09:04:39.000Z | after/config.py | ArjanCodes/2021-config | 7c2c3babb0fb66d69eac81590356fae512c5e784 | [
"MIT"
] | 1 | 2022-01-14T08:02:13.000Z | 2022-01-14T08:02:13.000Z | after/config.py | mauvilsa/2021-config | 870fd832bda269a1be7bfba32dd327df9987e74a | [
"MIT"
] | 1 | 2022-01-14T06:32:44.000Z | 2022-01-14T06:32:44.000Z | from dataclasses import dataclass
@dataclass
class Paths:
log: str
data: str
@dataclass
class Files:
train_data: str
train_labels: str
test_data: str
test_labels: str
@dataclass
class Params:
epoch_count: int
lr: float
batch_size: int
@dataclass
class MNISTConfig:
paths: Paths
files: Files
params: Params
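A hedged usage sketch of the config above: `Params` is redefined locally so the snippet is self-contained, and the field values are made up for illustration.

```python
from dataclasses import dataclass


@dataclass
class Params:
    epoch_count: int
    lr: float
    batch_size: int


params = Params(epoch_count=2, lr=0.001, batch_size=64)
print(params)  # dataclass provides __repr__ and __eq__ for free
```

`MNISTConfig` composes `Paths`, `Files`, and `Params` the same way, one nested dataclass per field.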
| 12.066667 | 33 | 0.679558 | 46 | 362 | 5.217391 | 0.478261 | 0.233333 | 0.141667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.265193 | 362 | 29 | 34 | 12.482759 | 0.902256 | 0 | 0 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.047619 | 0 | 0.809524 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
671b9c9f7b2c7728391666847cc8f06a6c3abea1 | 468 | py | Python | Bunnies.py | fatih-iver/Intro-to-Computer-Science-with-Python | 7b8127681415dfd100a0e70fe8a672cec696bbb7 | [
"MIT"
] | null | null | null | Bunnies.py | fatih-iver/Intro-to-Computer-Science-with-Python | 7b8127681415dfd100a0e70fe8a672cec696bbb7 | [
"MIT"
] | null | null | null | Bunnies.py | fatih-iver/Intro-to-Computer-Science-with-Python | 7b8127681415dfd100a0e70fe8a672cec696bbb7 | [
"MIT"
] | null | null | null | # Define a procedure, fibonacci, that takes a natural number as its input, and
# returns the value of that fibonacci number.
# Two Base Cases:
# fibonacci(0) => 0
# fibonacci(1) => 1
# Recursive Case:
# n > 1 : fibonacci(n) => fibonacci(n-1) + fibonacci(n-2)
def fibonacci(n):
return n if n == 0 or n == 1 else fibonacci(n-1) + fibonacci(n-2)
print (fibonacci(0))
#>>> 0
print (fibonacci(1))
#>>> 1
print (fibonacci(15))
#>>> 610 | 24.631579 | 79 | 0.604701 | 71 | 468 | 3.985915 | 0.464789 | 0.212014 | 0.116608 | 0.127208 | 0.155477 | 0.155477 | 0 | 0 | 0 | 0 | 0 | 0.056338 | 0.241453 | 468 | 19 | 80 | 24.631579 | 0.740845 | 0.576923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.4 | 0.6 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 3 |
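The recursive definition above recomputes subproblems exponentially. An iterative variant of the same recurrence (not part of the original exercise) computes the value in O(n) steps:

```python
def fibonacci_iter(n):
    # same recurrence, bottom-up: a and b walk the sequence pairwise
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


print(fibonacci_iter(15))  # → 610
```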
672b4006ae24930b53edb66efd8fb73b92773911 | 3,754 | py | Python | sa/profiles/ElectronR/KO01M/get_metrics.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 84 | 2017-10-22T11:01:39.000Z | 2022-02-27T03:43:48.000Z | sa/profiles/ElectronR/KO01M/get_metrics.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 22 | 2017-12-11T07:21:56.000Z | 2021-09-23T02:53:50.000Z | sa/profiles/ElectronR/KO01M/get_metrics.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 23 | 2017-12-06T06:59:52.000Z | 2022-02-24T00:02:25.000Z | # ---------------------------------------------------------------------
# ElectronR.KO01M.get_metrics
# ---------------------------------------------------------------------
# Copyright (C) 2007-2020 The NOC Project
# See LICENSE for details
# ---------------------------------------------------------------------
# NOC modules
from noc.sa.profiles.Generic.get_metrics import Script as GetMetricsScript, metrics
class Script(GetMetricsScript):
name = "ElectronR.KO01M.get_metrics"
@metrics(["Environment | Sensor Status"], volatile=False, access="S") # SNMP version
def get_sensor_status(self, metrics):
for metric in metrics:
value = 1
if metric.ifindex == 100:
continue
elif metric.ifindex == 140:
temp = self.snmp.get("1.3.6.1.4.1.35419.20.1.140.0", cached=True)
if -55 < temp < 600:
value = 0
elif metric.ifindex == 160:
impulse = self.snmp.get("1.3.6.1.4.1.35419.20.1.160.0", cached=True)
if impulse != 0:
value = 0
else:
res = self.snmp.get("1.3.6.1.4.1.35419.20.1.10%s.0" % metric.ifindex)
if res == 1:
value = 0
port = metric.labels[0].rsplit("::", 1)[-1]
self.set_metric(
id=("Environment | Sensor Status", metric.labels),
labels=[f"noc::sensor::{port}"],
value=value,
)
@metrics(["Environment | Temperature"], volatile=False, access="S") # SNMP version
def get_temperature(self, metrics):
for metric in metrics:
if metric.ifindex == 140:
value = self.snmp.get("1.3.6.1.4.1.35419.20.1.%s.0" % metric.ifindex, cached=True)
port = metric.labels[0].rsplit("::", 1)[-1]
self.set_metric(
id=("Environment | Temperature", metric.labels),
labels=[f"noc::module::{port}", f"noc::sensor::{port}"],
value=value,
multi=True,
)
@metrics(["Environment | Voltage"], volatile=False, access="S") # SNMP version
def get_voltage(self, metrics):
for metric in metrics:
value = self.snmp.get("1.3.6.1.4.1.35419.20.1.%s.0" % metric.ifindex)
port = metric.labels[0].rsplit("::", 1)[-1]
self.set_metric(
id=("Environment | Voltage", metric.labels),
labels=[f"noc::module::{port}", f"noc::sensor::{port}"],
value=value,
multi=True,
)
@metrics(["Environment | Pulse"], volatile=False, access="S") # SNMP version
def get_pulse(self, metrics):
for metric in metrics:
if metric.ifindex == 160:
value = self.snmp.get("1.3.6.1.4.1.35419.20.1.%s.0" % metric.ifindex, cached=True)
port = metric.labels[0].rsplit("::", 1)[-1]
self.set_metric(
id=("Environment | Pulse", metric.labels),
labels=[f"noc::sensor::{port}"],
value=value,
)
@metrics(["Environment | Power | Input | Status"], volatile=False, access="S") # SNMP version
def get_power_input_status(self, metrics):
for metric in metrics:
value = self.snmp.get("1.3.6.1.4.1.35419.20.1.10%s.0" % metric.ifindex, cached=True)
port = metric.labels[0].rsplit("::", 1)[-1]
self.set_metric(
id=("Environment | Power | Input | Status", metric.labels),
labels=[f"noc::sensor::{port}"],
value=0 if value == 1 else 1,
)
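Every handler above derives the sensor port with the same one-liner. Isolated from the SNMP plumbing, the label handling looks like this (the label value is illustrative):

```python
# The port is the last "::"-separated segment of the first metric label.
labels = ["noc::sensor::temp1"]
port = labels[0].rsplit("::", 1)[-1]
print(f"noc::sensor::{port}")  # → noc::sensor::temp1
```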
| 43.149425 | 98 | 0.485615 | 430 | 3,754 | 4.202326 | 0.176744 | 0.071942 | 0.042612 | 0.046486 | 0.710017 | 0.710017 | 0.710017 | 0.710017 | 0.623132 | 0.499723 | 0 | 0.066615 | 0.316196 | 3,754 | 86 | 99 | 43.651163 | 0.63732 | 0.100693 | 0 | 0.43662 | 0 | 0.098592 | 0.186143 | 0.066012 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070423 | false | 0 | 0.014085 | 0 | 0.112676 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
67366bf1792d0f436d2ce6181f326bfb3e3aea15 | 4,035 | py | Python | ubirch/linux/bleManager.py | ubirch/ubirch-ble-tool | 1399d018957e9a8424071296a71431c8ffa27e6f | [
"Apache-2.0"
] | 4 | 2018-07-20T16:35:52.000Z | 2020-11-12T13:38:58.000Z | ubirch/linux/bleManager.py | ubirch/ubirch-ble-tool | 1399d018957e9a8424071296a71431c8ffa27e6f | [
"Apache-2.0"
] | 1 | 2021-04-03T13:37:40.000Z | 2021-04-03T13:37:40.000Z | ubirch/linux/bleManager.py | ubirch/ubirch-ble-tool | 1399d018957e9a8424071296a71431c8ffa27e6f | [
"Apache-2.0"
] | null | null | null | from bleSuite import bleConnectionManager, bleServiceManager
from bluepy.btle import Scanner
from ubirch.linux.bleServiceManager import BLEServiceManager
class BLEManager(object):
""" BLE network manager """
def __init__(self, address, adapter, addressType, securityLevel, createRequester, psm=0, mtu=0):
""" Create an instance of BLE Manager """
self.address = address
self.cm = bleConnectionManager.BLEConnectionManager(address, adapter, addressType, securityLevel)
self.sm = BLEServiceManager(self.cm, self.address)
def connectDevice(self):
""" conect tot he BLE device
take manager object or mac address as a input variable
"""
self.cm.connect()
def disconnectDevice(self):
self.cm.disconnect()
def discoverDevice(self, name, timeout=5):
sm = BLEScanDevices()
deviceList = sm.scan(timeout)
for device in deviceList:
for values in device.getScanData():
if values[2].rstrip('\x00') == name:
return str(device.addr)
raise Exception("NO DEVICE FOUND")
def discoverServices(self):
return bleServiceManager.bleServiceDiscovery(self.address, self.cm)
def discoverCharacteristics(self):
ignoreUUID = ["00001800-0000-1000-8000-00805f9b34fb", "00001801-0000-1000-8000-00805f9b34fb"]
devServices = self.discoverServices()
devCharList = []
        for service in devServices.services:
            if service.uuid not in ignoreUUID:
                for characteristic in service.characteristics:
                    devCharList.append(characteristic.uuid)
        if devCharList:
            return devCharList
        raise Exception("No Services Found")
def write(self, handle, data):
bleServiceManager.bleServiceWriteToHandle(self.cm, handle, data)
def read(self, handle):
        # TODO: add a function getHandleByUUID to get a handle from its UUID;
        # it would help to read data on both macOS and Linux with ease
return bleServiceManager.bleServiceReadByHandle(self.cm, handle)
def isConnected(self):
return self.cm.isConnected()
# Services and Characteristics
def bleServiceWriteToHandle(self, handle, data):
return bleServiceManager.bleServiceWriteToHandle(self.cm, handle, data)
def bleServiceReadByHandle(self, handle):
return bleServiceManager.bleServiceReadByHandle(self.cm, handle)
def bleServiceReadByUUID(self, uuid):
return bleServiceManager.bleServiceReadByUUID(self.cm, uuid)
def bleDiscoverServices(self):
return bleServiceManager.bleServiceDiscovery(self.address, self.cm)
def showServices(self):
bledevice = bleServiceManager.bleServiceDiscovery(self.address, self.cm)
bledevice.printDeviceStructure()
def bleGetHandlefromUUID(self, uuid):
bledevice = bleServiceManager.bleServiceDiscovery(self.address, self.cm)
for service in bledevice.services:
# print service.uuid
for characteristic in service.characteristics:
# print characteristic.uuid
if uuid == characteristic.uuid:
return characteristic.valueHandle
return -1
def bleServiceWriteByUUID(self, uuid, data):
handle = self.bleGetHandlefromUUID(uuid)
return self.bleServiceWriteToHandle(handle, data)
class BLEScanDevices(object):
def __init__(self):
self.sm = Scanner()
def scan(self, timeOut=10):
return self.sm.scan(timeOut)
def stopScan(self):
pass
def isScanning(self):
pass
def getDeviceAddress(self, deviceName, timeOut=10):
"""return a tuple containing DeviceName and DeiceAddress"""
deviceList = self.scan(timeOut)
for device in deviceList:
for values in device.getScanData():
if values[2].rstrip('\x00') == deviceName:
return str(device.addr)
raise Exception("NO DEVICE FOUND")
| 34.487179 | 105 | 0.666419 | 399 | 4,035 | 6.719298 | 0.323308 | 0.031332 | 0.024245 | 0.070123 | 0.276762 | 0.276762 | 0.276762 | 0.141738 | 0.141738 | 0.058187 | 0 | 0.023125 | 0.249814 | 4,035 | 116 | 106 | 34.784483 | 0.86257 | 0.092441 | 0 | 0.213333 | 0 | 0 | 0.035151 | 0.019928 | 0 | 0 | 0 | 0.008621 | 0 | 1 | 0.28 | false | 0.026667 | 0.04 | 0.106667 | 0.533333 | 0.013333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
6738c6913f593e8f3489b3d849753c160556f231 | 480 | py | Python | storagetest/pkgs/pts/__init__.py | liufeng-elva/storage-test2 | 5364cc00dbe71b106f1bb740bf391e6124788bf4 | [
"MIT"
] | null | null | null | storagetest/pkgs/pts/__init__.py | liufeng-elva/storage-test2 | 5364cc00dbe71b106f1bb740bf391e6124788bf4 | [
"MIT"
] | null | null | null | storagetest/pkgs/pts/__init__.py | liufeng-elva/storage-test2 | 5364cc00dbe71b106f1bb740bf391e6124788bf4 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: UTF-8 -*-
"""
@file : __init__.py
@Time : 2020/11/12 13:37
@Author: Tao.Xu
@Email : tao.xu2008@outlook.com
"""
"""
phoronix-test-suite: Main for Performance Test
===================
https://github.com/phoronix-test-suite/phoronix-test-suite
The Phoronix Test Suite is the most comprehensive testing and
benchmarking platform available for Linux, Solaris, macOS, Windows,
and BSD operating systems.
"""
if __name__ == '__main__':
pass
| 22.857143 | 68 | 0.683333 | 66 | 480 | 4.787879 | 0.742424 | 0.151899 | 0.21519 | 0.126582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041162 | 0.139583 | 480 | 20 | 69 | 24 | 0.723971 | 0.283333 | 0 | 0 | 0 | 0 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
673ab82d9ec7dbd59a48086985188478a17a2fc5 | 756 | py | Python | contrib/analysis_server/src/analysis_server/__init__.py | Kenneth-T-Moore/OpenMDAO-Framework | 76e0ebbd6f424a03b547ff7b6039dea73d8d44dc | [
"Apache-2.0"
] | 3 | 2015-06-02T00:36:28.000Z | 2018-11-03T00:35:21.000Z | contrib/analysis_server/src/analysis_server/__init__.py | JustinSGray/OpenMDAO-Framework | 7ebd7fda0b10fbe8a86ae938dc4f135396dd9759 | [
"Apache-2.0"
] | null | null | null | contrib/analysis_server/src/analysis_server/__init__.py | JustinSGray/OpenMDAO-Framework | 7ebd7fda0b10fbe8a86ae938dc4f135396dd9759 | [
"Apache-2.0"
] | 1 | 2020-07-15T02:45:54.000Z | 2020-07-15T02:45:54.000Z | """
Support for interacting with ModelCenter via the AnalysisServer protocol.
Client-mode access to an AnalysisServer is provided by the 'client', 'factory',
and 'proxy' modules. Server-mode access by ModelCenter is provided by the
'server' and 'wrapper' modules.
An extension to the protocol allows 'eggs' to pe 'published': the egg is sent
to the server and made part of the server's set of supported components.
"""
from __future__ import absolute_import
from .client import Client
from .factory import ASFactory
from .server import Server, start_server, stop_server, DEFAULT_PORT
from .stream import Stream
from .units import have_translation, get_translation, set_translation
from .publish import publish_class, publish_object, publish_egg
| 36 | 79 | 0.797619 | 110 | 756 | 5.354545 | 0.490909 | 0.04584 | 0.040747 | 0.050934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145503 | 756 | 20 | 80 | 37.8 | 0.911765 | 0.544974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
67ecb4f05375d9a4dfbfec0d8b5a28b3678e0e4e | 172 | py | Python | docs/examples/timer.py | vlcinsky/nameko | 88d7e5211de4fcc1c34cd7f84d7c77f0619c5f5d | [
"Apache-2.0"
] | 3,425 | 2016-11-10T17:12:42.000Z | 2022-03-31T19:07:49.000Z | docs/examples/timer.py | vlcinsky/nameko | 88d7e5211de4fcc1c34cd7f84d7c77f0619c5f5d | [
"Apache-2.0"
] | 371 | 2020-03-04T21:51:56.000Z | 2022-03-31T20:59:11.000Z | docs/examples/timer.py | vlcinsky/nameko | 88d7e5211de4fcc1c34cd7f84d7c77f0619c5f5d | [
"Apache-2.0"
] | 420 | 2016-11-17T05:46:42.000Z | 2022-03-23T12:36:06.000Z | from nameko.timer import timer
class Service:
name ="service"
@timer(interval=1)
def ping(self):
# method executed every second
print("pong")
| 17.2 | 38 | 0.627907 | 21 | 172 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008 | 0.273256 | 172 | 9 | 39 | 19.111111 | 0.856 | 0.162791 | 0 | 0 | 0 | 0 | 0.077465 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.666667 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
67f86eeb953024e2463d4d73c584b0e83d0b4555 | 12,761 | py | Python | wykop/api/client.py | selfisekai/wykop-sdk-reborn | 7f17c5b2a3d282b5aaf72475a0f58ba66d5c5c5d | [
"MIT"
] | null | null | null | wykop/api/client.py | selfisekai/wykop-sdk-reborn | 7f17c5b2a3d282b5aaf72475a0f58ba66d5c5c5d | [
"MIT"
] | null | null | null | wykop/api/client.py | selfisekai/wykop-sdk-reborn | 7f17c5b2a3d282b5aaf72475a0f58ba66d5c5c5d | [
"MIT"
] | null | null | null | import logging
from typing import Dict, List
from wykop.api.api_const import PAGE_NAMED_ARG, BODY_NAMED_ARG, FILE_POST_NAME
from wykop.core.credentials import Credentials
from wykop.core.requestor import Requestor
log = logging.getLogger(__name__)
class WykopAPI:
"""Wykop API version 2."""
def __init__(self, appkey, secretkey, account_key=None,
output='', response_format='json'):
self.requestor = Requestor(
credentials=Credentials(appkey, secretkey, account_key),
output=output,
response_format=response_format
)
def request(self, rtype, rmethod=None,
named_params=None, api_params=None, post_params=None, file_params=None):
return self.requestor.request(rtype, rmethod=rmethod,
named_params=named_params,
api_params=api_params,
post_params=post_params,
file_params=file_params)
def authenticate(self, account_key=None):
self.requestor.authenticate(account_key)
# entries
def entries_stream(self, page=1, first_id=None):
        named_params = self.__with_page(page)
        named_params.update(dict(firstId=first_id))
return self.request('Entries', 'Stream', named_params=named_params)
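A note on the parameter-building step: `dict.update` mutates in place and returns `None`, so it must not be chained onto the expression that produced the dict. In isolation:

```python
params = {'page': 1}

chained = dict(params).update({'firstId': 7})  # update returns None
print(chained)  # → None

params.update({'firstId': 7})  # mutate first, then use the dict itself
print(params)  # → {'page': 1, 'firstId': 7}
```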
def entries_hot(self, page=1, period=12):
assert period in [6, 12, 24]
        named_params = self.__with_page(page)
        named_params.update(dict(period=period))
return self.request('Entries', 'Hot',
named_params=named_params)
def entries_active(self, page=1):
return self.request('Entries', 'Active',
named_params=self.__with_page(page))
def entries_observed(self, page=1):
return self.request('Entries', 'Observed',
named_params=self.__with_page(page))
def entry(self, entry_id):
return self.request('Entries', 'Entry',
api_params=self.__api_param(entry_id))
def entry_add(self, body: str, file=None, file_url: str = None, is_adult_media: bool = False):
return self.request('Entries', 'Add',
post_params=self.content_post_params(body, file_url, is_adult_media),
file_params=self.__with_file(file))
def entry_edit(self, entry_id: str, body: str, file=None, file_url: str = None, is_adult_media: bool = False):
return self.request('Entries', 'Edit',
post_params=self.content_post_params(body, file_url, is_adult_media),
api_params=self.__api_param(entry_id),
file_params=self.__with_file(file))
def entry_vote_up(self, entry_id: str):
return self.request('Entries', 'VoteUp',
api_params=self.__api_param(entry_id))
def entry_vote_remove(self, entry_id: str):
return self.request('Entries', 'VoteRemove',
api_params=self.__api_param(entry_id))
def entry_upvoters(self, entry_id: str):
return self.request('Entries', 'Upvoters',
api_params=self.__api_param(entry_id))
def entry_delete(self, entry_id: str):
return self.request('Entries', 'Delete',
api_params=self.__api_param(entry_id))
def entry_favorite_toggle(self, entry_id: str):
return self.request('Entries', 'Favorite',
api_params=self.__api_param(entry_id))
def entry_survey_vote(self, entry_id: str, answer_id: str):
return self.request('Entries', 'SurveyVote',
api_params=[entry_id, answer_id])
# comments
def entry_comment(self, comment_id: str):
return self.request('Entries', 'Comment',
api_params=self.__api_param(comment_id))
def entry_comment_add(self, entry_id: str, body: str, file=None, file_url: str = None,
is_adult_media: bool = False):
return self.request('Entries', 'CommentAdd',
post_params=self.content_post_params(body, file_url, is_adult_media),
api_params=self.__api_param(entry_id),
file_params=self.__with_file(file))
def entry_comment_edit(self, comment_id: str, body: str, file=None, file_url: str = None,
is_adult_media: bool = False):
return self.request('Entries', 'CommentEdit',
post_params=self.content_post_params(body, file_url, is_adult_media),
api_params=self.__api_param(comment_id),
file_params=self.__with_file(file))
def entry_comment_delete(self, comment_id: str):
return self.request('Entries', 'CommentDelete',
api_params=self.__api_param(comment_id))
def entry_comment_vote_up(self, comment_id: str):
return self.request('Entries', 'CommentVoteUp',
api_params=self.__api_param(comment_id))
    def entry_comment_vote_remove(self, comment_id: str):
return self.request('Entries', 'CommentVoteRemove',
api_params=self.__api_param(comment_id))
def entry_comment_observed(self, page: int = 1):
return self.request('Entries', 'ObservedComments',
named_params=self.__with_page(page))
def entry_comment_favorite_toggle(self, entry_id: str):
return self.request('Entries', 'CommentFavorite',
api_params=self.__api_param(entry_id))
# links
def links_promoted(self, page=1):
return self.request('links', 'promoted',
named_params=self.__with_page(page))
# mywykop
# profiles
def observe_profile(self, username):
named_params = {
'observe': username,
}
return self.request('profiles', named_params=named_params)
def unobserve_profile(self, username):
named_params = {
'unobserve': username,
}
return self.request('profiles', named_params=named_params)
def block_profile(self, username):
named_params = {
'block': username,
}
return self.request('profiles', named_params=named_params)
def unblock_profile(self, username):
named_params = {
'unblock': username,
}
return self.request('profiles', named_params=named_params)
# hits
def hits_popular(self):
return self.request('hits', 'popular')
# pm
def conversations_list(self):
return self.request('pm', 'conversationsList')
def conversation(self, receiver: str):
return self.request('pm', 'Conversation',
api_params=self.__api_param(receiver))
def send_message(self, receiver: str, message: str):
return self.request('pm', 'SendMessage',
post_params=self.__with_body(message),
api_params=self.__api_param(receiver))
def delete_conversation(self, receiver: str):
return self.request('pm', 'DeleteConversation',
api_params=self.__api_param(receiver))
# notifications
def notifications_direct(self, page=1):
return self.request('notifications',
named_params=self.__with_page(page))
def notifications_direct_count(self):
return self.request('notifications', 'Count')
def notifications_hashtags_notifications(self, page=1):
return self.request('notifications', 'hashtags',
named_params=self.__with_page(page))
def notifications_hashtags_count(self):
return self.request('notifications', 'hashtagscount')
def notifications_all(self, page=1):
return self.request('notifications', 'total',
named_params=self.__with_page(page))
def notifications_all_count(self):
return self.request('notifications', 'totalcount')
def notification_mark_all_as_read(self):
return self.request('Notifications', 'ReadAllNotifications')
def notifications_mark_all_direct_as_read(self):
return self.request('Notifications', 'ReadDirectedNotifications')
def notifications_mark_all_hashtag_as_read(self):
return self.request('Notifications', 'ReadHashTagsNotifications')
def notification_mark_as_read(self, notification_id):
return self.request('Notifications', 'MarkAsRead',
api_params=self.__api_param(notification_id))
# search
def search_links(self, page=1, query=None, when=None, votes=None, from_date=None, to_date=None, what=None,
sort=None):
assert len(query) > 2 if query else True
assert when in ["all", "today", "yesterday", "week", "month", "range"] if when else True
        assert what in ["all", "promoted", "archived", "duplicates"] if what else True
        assert sort in ["best", "diggs", "comments", "new"] if sort else True
post_params = {
'q': query,
'when': when,
'votes': votes,
'from': from_date,
'to': to_date,
'what': what,
'sort': sort
}
return self.request('Search', 'Links',
post_params=post_params,
named_params=self.__with_page(page))
def search_entries(self, page=1, query=None, when=None, votes=None, from_date=None, to_date=None):
assert len(query) > 2 if query else True
assert when in ["all", "today", "yesterday", "week", "month", "range"] if when else True
post_params = {
'q': query,
'when': when,
'votes': votes,
'from': from_date,
'to': to_date
}
return self.request('Search', 'Entries',
post_params=post_params,
named_params=self.__with_page(page))
def search_profiles(self, query):
assert len(query) > 2 if query else True
post_params = {
'q': query,
}
return self.request('Search', 'Profiles',
post_params=post_params)
# tags
def tag(self, tag, page=1):
return self.request('Tags', 'Index',
named_params=dict(page=page),
api_params=self.__api_param(tag))
def tag_links(self, tag, page=1):
return self.request('Tags', 'Links',
named_params=self.__with_page(page),
api_params=self.__api_param(tag))
def tag_entries(self, tag, page=1):
return self.request('Tags', 'Entries',
named_params=self.__with_page(page),
api_params=self.__api_param(tag))
def observe_tag(self, tag):
return self.request('Tags', 'Observe',
api_params=self.__api_param(tag))
def unobserve_tag(self, tag):
return self.request('Tags', 'Unobserve',
api_params=self.__api_param(tag))
def enable_tags_notifications(self, tag):
return self.request('Tags', 'Notify',
api_params=self.__api_param(tag))
def disable_tags_notifications(self, tag):
return self.request('Tags', 'Dontnotify',
api_params=self.__api_param(tag))
def block_tag(self, tag):
return self.request('Tags', 'Block',
api_params=self.__api_param(tag))
def unblock_tag(self, tag):
return self.request('Tags', 'Unblock',
api_params=self.__api_param(tag))
@staticmethod
def __api_param(param: str) -> List[str]:
return [str(param)]
@staticmethod
def __with_page(page: int) -> Dict[str, int]:
return {PAGE_NAMED_ARG: page}
@staticmethod
def __with_body(body: str) -> Dict[str, str]:
return {BODY_NAMED_ARG: body}
@staticmethod
def __with_file(file: str) -> Dict[str, str]:
return {FILE_POST_NAME: file} if file else None
@staticmethod
def content_post_params(body: str, file_url: str, is_adult_media: bool):
post_params = {
'adultmedia': is_adult_media,
'body': body,
'embed': file_url
}
return post_params
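Several methods above build `named_params` by merging the page dict with extra keys; note that `dict.update()` mutates in place and returns `None`, so chaining it off another call silently loses the result. A minimal illustration of the pitfall and a safe `**`-unpacking merge:

```python
# dict.update() mutates in place and returns None, so the chained
# expression evaluates to None instead of the merged parameters.
chained = {'page': 2}.copy().update({'firstId': 10})

# Merging with ** unpacking produces a new, complete dict.
merged = {**{'page': 2}, 'firstId': 10}

print(chained)  # None
print(merged)   # {'page': 2, 'firstId': 10}
```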

# --- utils/image_utils.py (repo: novicasarenac/car-racing-rl, license: MIT) ---
import PIL.Image
import numpy as np
def to_grayscale(img):
    # ITU-R BT.601 luma weights (the blue coefficient is 0.114, not 0.144)
    return np.dot(img, [0.299, 0.587, 0.114])
def zero_center(img):
return img - 127.0
def crop(img, bottom=12, left=6, right=6):
height, width = img.shape
return img[0: height - bottom, left: width - right]
def save(img, path):
pil_img = PIL.Image.fromarray(img)
pil_img.save(path)
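A self-contained sketch of the preprocessing chain on a synthetic frame (the 96x96 input size is an arbitrary example of mine; the helpers are restated so the snippet runs standalone):

```python
import numpy as np

def to_grayscale(img):
    # ITU-R BT.601 luma weights
    return np.dot(img, [0.299, 0.587, 0.114])

def zero_center(img):
    return img - 127.0

def crop(img, bottom=12, left=6, right=6):
    height, width = img.shape
    return img[0:height - bottom, left:width - right]

frame = np.random.randint(0, 256, size=(96, 96, 3)).astype(np.float32)
out = crop(zero_center(to_grayscale(frame)))
print(out.shape)  # (84, 84)
```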

# --- test/unit/vint/ast/plugin/scope_plugin/stub_node.py (repo: mosheavni/vint, license: MIT) ---
from vint.ast.node_type import NodeType
from vint.ast.plugin.scope_plugin.identifier_attribute import (
IDENTIFIER_ATTRIBUTE,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG,
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT,
)
def create_id(id_value, is_declarative=True, is_function=False, is_autoload=False,
is_declarative_parameter=False, is_on_str_expr_context=False):
return {
'type': NodeType.IDENTIFIER.value,
'value': id_value,
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: is_declarative,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: False,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: is_function,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: is_autoload,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: is_declarative_parameter,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: is_on_str_expr_context,
},
}
def create_env(env_value):
return {
'type': NodeType.ENV.value,
'value': env_value,
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: True,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: False,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: False,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: False,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: False,
},
}
def create_option(opt_value):
return {
'type': NodeType.OPTION.value,
'value': opt_value,
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: True,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: False,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: False,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: False,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: False,
},
}
def create_reg(reg_value):
return {
'type': NodeType.REG.value,
'value': reg_value,
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: True,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: False,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: False,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: False,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: False,
},
}
def create_curlyname(is_declarative=True):
""" Create a node as a `my_{'var'}`
"""
return {
'type': NodeType.CURLYNAME.value,
'value': [
{
'type': NodeType.CURLYNAMEPART.value,
'value': 'my_',
},
{
'type': NodeType.CURLYNAMEEXPR.value,
'value': {
'type': NodeType.CURLYNAMEEXPR.value,
'value': 'var',
},
}
],
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: is_declarative,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: True,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: False,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: False,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: False,
},
}
def create_subscript_member(is_declarative=True):
return {
'type': NodeType.IDENTIFIER.value,
'value': 'member',
IDENTIFIER_ATTRIBUTE: {
IDENTIFIER_ATTRIBUTE_DECLARATION_FLAG: is_declarative,
IDENTIFIER_ATTRIBUTE_DYNAMIC_FLAG: False,
IDENTIFIER_ATTRIBUTE_MEMBER_FLAG: True,
IDENTIFIER_ATTRIBUTE_FUNCTION_FLAG: False,
IDENTIFIER_ATTRIBUTE_AUTOLOAD_FLAG: False,
IDENTIFIER_ATTRIBUTE_FUNCTION_ARGUMENT_FLAG: False,
IDENTIFIER_ATTRIBUTE_LAMBDA_STRING_CONTEXT: False,
},
}

# --- Funcoes/ex106-sistemaInterativoAjuda.py (repo: ascaniopy/python, license: MIT) ---
from time import sleep
c = ('\033[m',         # 0 - No color
     '\033[0;30;41m',  # 1 - Red
     '\033[0;30;42m',  # 2 - Green
     '\033[0;30;43m',  # 3 - Yellow
     '\033[0;30;44m',  # 4 - Blue
     '\033[0;30;45m',  # 5 - Purple
     '\033[0;30m'      # 6 - White
     )
# Main program
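The tuple stores ANSI SGR escape sequences; wrapping text between a color code and the reset code `'\033[m'` colors it in a terminal. A short sketch:

```python
# '\033[0;30;42m' selects SGR reset, black foreground, green background
c = ('\033[m', '\033[0;30;41m', '\033[0;30;42m')
msg = f"{c[2]} OK {c[0]}"  # colored " OK ", then reset back to default
print(repr(msg))
```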

# --- src/robotide/context/coreplugins.py (repo: veryl-technologies/t24-tests-ide, licenses: ECL-2.0, Apache-2.0) ---
# Copyright 2008-2012 Nokia Siemens Networks Oyj
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def get_core_plugins():
from robotide.run import RunAnything
from robotide.recentfiles import RecentFilesPlugin
from robotide.ui.preview import PreviewPlugin
from robotide.ui.keywordsearch import KeywordSearch
from robotide.editor import EditorPlugin
from robotide.editor.texteditor import TextEditorPlugin
from robotide.log import LogPlugin
from robotide.searchtests.searchtests import TestSearchPlugin
from robotide.spec.specimporter import SpecImporterPlugin
return [RunAnything, RecentFilesPlugin, PreviewPlugin, SpecImporterPlugin,
EditorPlugin, TextEditorPlugin, KeywordSearch, LogPlugin, TestSearchPlugin]

# --- src/util/utils.py (repo: 5agado/intro-ai, license: Apache-2.0) ---
import os
import math
def dotProduct(v1, v2):
return sum(x * y for x, y in zip(v1, v2))
def sigmoid(x):
return 1.0 / (1.0 + math.exp(-x))
def getResourcesPath():
return os.path.abspath(os.path.join(os.path.dirname( __file__ ), os.pardir, 'resources'))
def readTrainModel(filePath, numOutputs = 1):
    res = []
    # context manager ensures the file handle is closed
    with open(filePath, 'r') as f:
        for line in f:
            sLine = list(map(float, line.strip().split(" ")))
            res.append((sLine[:-numOutputs], sLine[-numOutputs:]))
    return res
def readMatrix(filePath):
    res = []
    with open(filePath, 'r') as f:
        for line in f:
            res.append(list(map(float, line.strip().split(" "))))
    return res
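For reference, the two pure helpers behave as follows (restated here so the snippet runs standalone):

```python
import math

def dot_product(v1, v2):
    # same logic as dotProduct above: elementwise products, summed
    return sum(x * y for x, y in zip(v1, v2))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(dot_product([1, 2, 3], [4, 5, 6]))  # 32
print(sigmoid(0.0))                       # 0.5
```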

# --- _6_EXERCISE_BASIC SYNTAX, CONDITIONAL STATEMENTS AND LOOPS/_7_Maximum_Multiple.py (repo: YordanPetrovDS/Python_Fundamentals, license: MIT) ---
divisor = int(input())
bound = int(input())
for num in range(bound, 0, -1):
if num % divisor == 0:
print(num)
break
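The countdown loop prints the largest multiple of `divisor` that does not exceed `bound`; the same value falls out of integer division. A sketch (the `max_multiple` name is mine):

```python
def max_multiple(divisor, bound):
    # largest multiple of divisor not exceeding bound
    return (bound // divisor) * divisor

print(max_multiple(3, 10))   # 9
print(max_multiple(5, 25))   # 25
```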

# --- HW/hklearn/model.py (repo: leguiart/Machine-Learning, license: MIT) ---
import abc
'''
Interface that every model implements.
Every model in the hklearn library implements
the following behaviors:
    -fit : trains the model on a matrix of examples X and their respective labels y
    -predict : the trained model predicts based on an input X
    of examples
'''
class ModelInterface(metaclass=abc.ABCMeta):
@classmethod
def __subclasshook__(cls, subclass):
return (hasattr(subclass, 'fit') and
callable(subclass.fit) and
hasattr(subclass, 'predict') and
callable(subclass.predict))
@ModelInterface.register
class Model:
"""Entrena modelo"""
def fit(self, X, y):
pass
"""Prediccion con base en el modelo entrenado"""
def predict(self, X):
pass | 31.333333 | 89 | 0.640662 | 99 | 846 | 5.434343 | 0.545455 | 0.04461 | 0.063197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282506 | 846 | 27 | 90 | 31.333333 | 0.886326 | 0.016548 | 0 | 0.142857 | 0 | 0 | 0.023529 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 1 | 0.214286 | false | 0.142857 | 0.071429 | 0.071429 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
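Because `__subclasshook__` only checks for callable `fit` and `predict` attributes, any class with that shape passes `issubclass`/`isinstance` checks without inheriting. A runnable sketch (`DummyModel` is a made-up example, not part of the library):

```python
import abc

class ModelInterface(metaclass=abc.ABCMeta):
    @classmethod
    def __subclasshook__(cls, subclass):
        # structural check: duck-typed fit/predict make a "model"
        return (hasattr(subclass, 'fit') and callable(subclass.fit)
                and hasattr(subclass, 'predict') and callable(subclass.predict))

class DummyModel:
    # never inherits from ModelInterface, but matches its shape
    def fit(self, X, y):
        return self

    def predict(self, X):
        return X

print(issubclass(DummyModel, ModelInterface))    # True
print(isinstance(DummyModel(), ModelInterface))  # True
```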

# --- aiotdlib/api/types/update.py (repo: pylakey/pytdlib, license: MIT) ---
# =============================================================================== #
# #
# This file has been generated automatically!! Do not change this manually! #
# #
# =============================================================================== #
from __future__ import annotations
import typing
from pydantic import Field
from .address import Address
from .authorization_state import AuthorizationState
from .background import Background
from .basic_group import BasicGroup
from .basic_group_full_info import BasicGroupFullInfo
from .call import Call
from .callback_query_payload import CallbackQueryPayload
from .chat import Chat
from .chat_action import ChatAction
from .chat_action_bar import ChatActionBar
from .chat_filter_info import ChatFilterInfo
from .chat_invite_link import ChatInviteLink
from .chat_join_request import ChatJoinRequest
from .chat_join_requests_info import ChatJoinRequestsInfo
from .chat_list import ChatList
from .chat_member import ChatMember
from .chat_nearby import ChatNearby
from .chat_notification_settings import ChatNotificationSettings
from .chat_permissions import ChatPermissions
from .chat_photo_info import ChatPhotoInfo
from .chat_position import ChatPosition
from .chat_theme import ChatTheme
from .chat_type import ChatType
from .connection_state import ConnectionState
from .draft_message import DraftMessage
from .file import File
from .group_call import GroupCall
from .group_call_participant import GroupCallParticipant
from .language_pack_string import LanguagePackString
from .location import Location
from .message import Message
from .message_content import MessageContent
from .message_interaction_info import MessageInteractionInfo
from .message_sender import MessageSender
from .notification import Notification
from .notification_group import NotificationGroup
from .notification_group_type import NotificationGroupType
from .notification_settings_scope import NotificationSettingsScope
from .option_value import OptionValue
from .order_info import OrderInfo
from .poll import Poll
from .reply_markup import ReplyMarkup
from .scope_notification_settings import ScopeNotificationSettings
from .secret_chat import SecretChat
from .sticker import Sticker
from .sticker_set import StickerSet
from .sticker_sets import StickerSets
from .suggested_action import SuggestedAction
from .supergroup import Supergroup
from .supergroup_full_info import SupergroupFullInfo
from .terms_of_service import TermsOfService
from .user import User
from .user_full_info import UserFullInfo
from .user_privacy_setting import UserPrivacySetting
from .user_privacy_setting_rules import UserPrivacySettingRules
from .user_status import UserStatus
from .video_chat import VideoChat
from ..base_object import BaseObject
class Update(BaseObject):
"""
Contains notifications about data changes
"""
ID: str = Field("update", alias="@type")
class UpdateActiveNotifications(Update):
"""
    Contains active notifications that were shown on previous application launches. This update is sent only if the message database is used. In that case it comes once before any updateNotification and updateNotificationGroup update
:param groups: Lists of active notification groups
:type groups: :class:`list[NotificationGroup]`
"""
ID: str = Field("updateActiveNotifications", alias="@type")
groups: list[NotificationGroup]
@staticmethod
def read(q: dict) -> UpdateActiveNotifications:
return UpdateActiveNotifications.construct(**q)
class UpdateAnimatedEmojiMessageClicked(Update):
"""
Some animated emoji message was clicked and a big animated sticker must be played if the message is visible on the screen. chatActionWatchingAnimations with the text of the message needs to be sent if the sticker is played
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
:param sticker: The animated sticker to be played
:type sticker: :class:`Sticker`
"""
ID: str = Field("updateAnimatedEmojiMessageClicked", alias="@type")
chat_id: int
message_id: int
sticker: Sticker
@staticmethod
def read(q: dict) -> UpdateAnimatedEmojiMessageClicked:
return UpdateAnimatedEmojiMessageClicked.construct(**q)
class UpdateAnimationSearchParameters(Update):
"""
The parameters of animation search through GetOption("animation_search_bot_username") bot has changed
:param provider: Name of the animation search provider
:type provider: :class:`str`
:param emojis: The new list of emojis suggested for searching
:type emojis: :class:`list[str]`
"""
ID: str = Field("updateAnimationSearchParameters", alias="@type")
provider: str
emojis: list[str]
@staticmethod
def read(q: dict) -> UpdateAnimationSearchParameters:
return UpdateAnimationSearchParameters.construct(**q)
class UpdateAuthorizationState(Update):
"""
The user authorization state has changed
:param authorization_state: New authorization state
:type authorization_state: :class:`AuthorizationState`
"""
ID: str = Field("updateAuthorizationState", alias="@type")
authorization_state: AuthorizationState
@staticmethod
def read(q: dict) -> UpdateAuthorizationState:
return UpdateAuthorizationState.construct(**q)
class UpdateBasicGroup(Update):
"""
Some data of a basic group has changed. This update is guaranteed to come before the basic group identifier is returned to the application
:param basic_group: New data about the group
:type basic_group: :class:`BasicGroup`
"""
ID: str = Field("updateBasicGroup", alias="@type")
basic_group: BasicGroup
@staticmethod
def read(q: dict) -> UpdateBasicGroup:
return UpdateBasicGroup.construct(**q)
class UpdateBasicGroupFullInfo(Update):
"""
Some data in basicGroupFullInfo has been changed
:param basic_group_id: Identifier of a basic group
:type basic_group_id: :class:`int`
:param basic_group_full_info: New full information about the group
:type basic_group_full_info: :class:`BasicGroupFullInfo`
"""
ID: str = Field("updateBasicGroupFullInfo", alias="@type")
basic_group_id: int
basic_group_full_info: BasicGroupFullInfo
@staticmethod
def read(q: dict) -> UpdateBasicGroupFullInfo:
return UpdateBasicGroupFullInfo.construct(**q)
class UpdateCall(Update):
"""
New call was created or information about a call was updated
:param call: New data about a call
:type call: :class:`Call`
"""
ID: str = Field("updateCall", alias="@type")
call: Call
@staticmethod
def read(q: dict) -> UpdateCall:
return UpdateCall.construct(**q)
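Every `read()` above delegates to pydantic's `construct()`, which builds the model without running validation. A pydantic-free stand-in (the `PlainUpdate`/`PlainUpdateCall` names are mine) that mirrors the same shape:

```python
# Stand-in for the generated read()/construct pattern: strip service
# keys like '@type' and attach the rest as attributes, no validation.
class PlainUpdate:
    ID = "update"

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    @classmethod
    def read(cls, q: dict):
        # '@type' is not a valid Python identifier, so drop service keys
        return cls(**{k: v for k, v in q.items() if not k.startswith('@')})

class PlainUpdateCall(PlainUpdate):
    ID = "updateCall"

u = PlainUpdateCall.read({'@type': 'updateCall', 'call': {'id': 7}})
print(u.ID, u.call['id'])  # updateCall 7
```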
class UpdateChatAction(Update):
"""
A message sender activity in the chat has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_thread_id: If not 0, a message thread identifier in which the action was performed
:type message_thread_id: :class:`int`
:param sender_id: Identifier of a message sender performing the action
:type sender_id: :class:`MessageSender`
:param action: The action
:type action: :class:`ChatAction`
"""
ID: str = Field("updateChatAction", alias="@type")
chat_id: int
message_thread_id: int
sender_id: MessageSender
action: ChatAction
@staticmethod
def read(q: dict) -> UpdateChatAction:
return UpdateChatAction.construct(**q)
class UpdateChatActionBar(Update):
"""
The chat action bar was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param action_bar: The new value of the action bar; may be null, defaults to None
:type action_bar: :class:`ChatActionBar`, optional
"""
ID: str = Field("updateChatActionBar", alias="@type")
chat_id: int
action_bar: typing.Optional[ChatActionBar] = None
@staticmethod
def read(q: dict) -> UpdateChatActionBar:
return UpdateChatActionBar.construct(**q)
class UpdateChatDefaultDisableNotification(Update):
"""
The value of the default disable_notification parameter, used when a message is sent to the chat, was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param default_disable_notification: The new default_disable_notification value
:type default_disable_notification: :class:`bool`
"""
ID: str = Field("updateChatDefaultDisableNotification", alias="@type")
chat_id: int
default_disable_notification: bool
@staticmethod
def read(q: dict) -> UpdateChatDefaultDisableNotification:
return UpdateChatDefaultDisableNotification.construct(**q)
class UpdateChatDraftMessage(Update):
"""
A chat draft has changed. Be aware that the update may come in the currently opened chat but with old content of the draft. If the user has changed the content of the draft, this update mustn't be applied
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param draft_message: The new draft message; may be null, defaults to None
:type draft_message: :class:`DraftMessage`, optional
:param positions: The new chat positions in the chat lists
:type positions: :class:`list[ChatPosition]`
"""
ID: str = Field("updateChatDraftMessage", alias="@type")
chat_id: int
draft_message: typing.Optional[DraftMessage] = None
positions: list[ChatPosition]
@staticmethod
def read(q: dict) -> UpdateChatDraftMessage:
return UpdateChatDraftMessage.construct(**q)
class UpdateChatFilters(Update):
"""
The list of chat filters or a chat filter has changed
:param chat_filters: The new list of chat filters
:type chat_filters: :class:`list[ChatFilterInfo]`
"""
ID: str = Field("updateChatFilters", alias="@type")
chat_filters: list[ChatFilterInfo]
@staticmethod
def read(q: dict) -> UpdateChatFilters:
return UpdateChatFilters.construct(**q)
class UpdateChatHasProtectedContent(Update):
"""
A chat content was allowed or restricted for saving
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param has_protected_content: New value of has_protected_content
:type has_protected_content: :class:`bool`
"""
ID: str = Field("updateChatHasProtectedContent", alias="@type")
chat_id: int
has_protected_content: bool
@staticmethod
def read(q: dict) -> UpdateChatHasProtectedContent:
return UpdateChatHasProtectedContent.construct(**q)
class UpdateChatHasScheduledMessages(Update):
"""
A chat's has_scheduled_messages field has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param has_scheduled_messages: New value of has_scheduled_messages
:type has_scheduled_messages: :class:`bool`
"""
ID: str = Field("updateChatHasScheduledMessages", alias="@type")
chat_id: int
has_scheduled_messages: bool
@staticmethod
def read(q: dict) -> UpdateChatHasScheduledMessages:
return UpdateChatHasScheduledMessages.construct(**q)
class UpdateChatIsBlocked(Update):
"""
A chat was blocked or unblocked
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param is_blocked: New value of is_blocked
:type is_blocked: :class:`bool`
"""
ID: str = Field("updateChatIsBlocked", alias="@type")
chat_id: int
is_blocked: bool
@staticmethod
def read(q: dict) -> UpdateChatIsBlocked:
return UpdateChatIsBlocked.construct(**q)
class UpdateChatIsMarkedAsUnread(Update):
"""
A chat was marked as unread or was read
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param is_marked_as_unread: New value of is_marked_as_unread
:type is_marked_as_unread: :class:`bool`
"""
ID: str = Field("updateChatIsMarkedAsUnread", alias="@type")
chat_id: int
is_marked_as_unread: bool
@staticmethod
def read(q: dict) -> UpdateChatIsMarkedAsUnread:
return UpdateChatIsMarkedAsUnread.construct(**q)
class UpdateChatLastMessage(Update):
"""
The last message of a chat was changed. If last_message is null, then the last message in the chat became unknown. Some new unknown messages might be added to the chat in this case
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param last_message: The new last message in the chat; may be null, defaults to None
:type last_message: :class:`Message`, optional
:param positions: The new chat positions in the chat lists
:type positions: :class:`list[ChatPosition]`
"""
ID: str = Field("updateChatLastMessage", alias="@type")
chat_id: int
last_message: typing.Optional[Message] = None
positions: list[ChatPosition]
@staticmethod
def read(q: dict) -> UpdateChatLastMessage:
return UpdateChatLastMessage.construct(**q)
class UpdateChatMember(Update):
"""
User rights changed in a chat; for bots only
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param actor_user_id: Identifier of the user, changing the rights
:type actor_user_id: :class:`int`
:param date: Point in time (Unix timestamp) when the user rights were changed
:type date: :class:`int`
:param invite_link: If user has joined the chat using an invite link, the invite link; may be null, defaults to None
:type invite_link: :class:`ChatInviteLink`, optional
:param old_chat_member: Previous chat member
:type old_chat_member: :class:`ChatMember`
:param new_chat_member: New chat member
:type new_chat_member: :class:`ChatMember`
"""
ID: str = Field("updateChatMember", alias="@type")
chat_id: int
actor_user_id: int
date: int
invite_link: typing.Optional[ChatInviteLink] = None
old_chat_member: ChatMember
new_chat_member: ChatMember
@staticmethod
def read(q: dict) -> UpdateChatMember:
return UpdateChatMember.construct(**q)
class UpdateChatMessageSender(Update):
"""
The message sender that is selected to send messages in a chat has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_sender_id: New value of message_sender_id; may be null if the user can't change message sender, defaults to None
:type message_sender_id: :class:`MessageSender`, optional
"""
ID: str = Field("updateChatMessageSender", alias="@type")
chat_id: int
message_sender_id: typing.Optional[MessageSender] = None
@staticmethod
def read(q: dict) -> UpdateChatMessageSender:
return UpdateChatMessageSender.construct(**q)
class UpdateChatMessageTtl(Update):
"""
The message Time To Live setting for a chat was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_ttl: New value of message_ttl
:type message_ttl: :class:`int`
"""
ID: str = Field("updateChatMessageTtl", alias="@type")
chat_id: int
message_ttl: int
@staticmethod
def read(q: dict) -> UpdateChatMessageTtl:
return UpdateChatMessageTtl.construct(**q)
class UpdateChatNotificationSettings(Update):
"""
Notification settings for a chat were changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param notification_settings: The new notification settings
:type notification_settings: :class:`ChatNotificationSettings`
"""
ID: str = Field("updateChatNotificationSettings", alias="@type")
chat_id: int
notification_settings: ChatNotificationSettings
@staticmethod
def read(q: dict) -> UpdateChatNotificationSettings:
return UpdateChatNotificationSettings.construct(**q)
class UpdateChatOnlineMemberCount(Update):
"""
The number of online group members has changed. This update with non-zero count is sent only for currently opened chats. There is no guarantee that it will be sent just after the count has changed
:param chat_id: Identifier of the chat
:type chat_id: :class:`int`
:param online_member_count: New number of online members in the chat, or 0 if unknown
:type online_member_count: :class:`int`
"""
ID: str = Field("updateChatOnlineMemberCount", alias="@type")
chat_id: int
online_member_count: int
@staticmethod
def read(q: dict) -> UpdateChatOnlineMemberCount:
return UpdateChatOnlineMemberCount.construct(**q)
class UpdateChatPendingJoinRequests(Update):
"""
The chat pending join requests were changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param pending_join_requests: The new data about pending join requests; may be null, defaults to None
:type pending_join_requests: :class:`ChatJoinRequestsInfo`, optional
"""
ID: str = Field("updateChatPendingJoinRequests", alias="@type")
chat_id: int
pending_join_requests: typing.Optional[ChatJoinRequestsInfo] = None
@staticmethod
def read(q: dict) -> UpdateChatPendingJoinRequests:
return UpdateChatPendingJoinRequests.construct(**q)
class UpdateChatPermissions(Update):
"""
Chat permissions were changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param permissions: The new chat permissions
:type permissions: :class:`ChatPermissions`
"""
ID: str = Field("updateChatPermissions", alias="@type")
chat_id: int
permissions: ChatPermissions
@staticmethod
def read(q: dict) -> UpdateChatPermissions:
return UpdateChatPermissions.construct(**q)
class UpdateChatPhoto(Update):
"""
A chat photo was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param photo: The new chat photo; may be null, defaults to None
:type photo: :class:`ChatPhotoInfo`, optional
"""
ID: str = Field("updateChatPhoto", alias="@type")
chat_id: int
photo: typing.Optional[ChatPhotoInfo] = None
@staticmethod
def read(q: dict) -> UpdateChatPhoto:
return UpdateChatPhoto.construct(**q)
class UpdateChatPosition(Update):
"""
The position of a chat in a chat list has changed. Instead of this update, updateChatLastMessage or updateChatDraftMessage might be sent
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param position: New chat position. If new order is 0, then the chat needs to be removed from the list
:type position: :class:`ChatPosition`
"""
ID: str = Field("updateChatPosition", alias="@type")
chat_id: int
position: ChatPosition
@staticmethod
def read(q: dict) -> UpdateChatPosition:
return UpdateChatPosition.construct(**q)
class UpdateChatReadInbox(Update):
"""
Incoming messages were read or the number of unread messages has been changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param last_read_inbox_message_id: Identifier of the last read incoming message
:type last_read_inbox_message_id: :class:`int`
:param unread_count: The number of unread messages left in the chat
:type unread_count: :class:`int`
"""
ID: str = Field("updateChatReadInbox", alias="@type")
chat_id: int
last_read_inbox_message_id: int
unread_count: int
@staticmethod
def read(q: dict) -> UpdateChatReadInbox:
return UpdateChatReadInbox.construct(**q)
class UpdateChatReadOutbox(Update):
"""
Outgoing messages were read
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param last_read_outbox_message_id: Identifier of last read outgoing message
:type last_read_outbox_message_id: :class:`int`
"""
ID: str = Field("updateChatReadOutbox", alias="@type")
chat_id: int
last_read_outbox_message_id: int
@staticmethod
def read(q: dict) -> UpdateChatReadOutbox:
return UpdateChatReadOutbox.construct(**q)
class UpdateChatReplyMarkup(Update):
"""
The default chat reply markup was changed. Can occur because new messages with reply markup were received or because an old reply markup was hidden by the user
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param reply_markup_message_id: Identifier of the message from which reply markup needs to be used; 0 if there is no default custom reply markup in the chat
:type reply_markup_message_id: :class:`int`
"""
ID: str = Field("updateChatReplyMarkup", alias="@type")
chat_id: int
reply_markup_message_id: int
@staticmethod
def read(q: dict) -> UpdateChatReplyMarkup:
return UpdateChatReplyMarkup.construct(**q)
class UpdateChatTheme(Update):
"""
The chat theme was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param theme_name: The new name of the chat theme; may be empty if theme was reset to default
:type theme_name: :class:`str`
"""
ID: str = Field("updateChatTheme", alias="@type")
chat_id: int
theme_name: str
@staticmethod
def read(q: dict) -> UpdateChatTheme:
return UpdateChatTheme.construct(**q)
class UpdateChatThemes(Update):
"""
The list of available chat themes has changed
:param chat_themes: The new list of chat themes
:type chat_themes: :class:`list[ChatTheme]`
"""
ID: str = Field("updateChatThemes", alias="@type")
chat_themes: list[ChatTheme]
@staticmethod
def read(q: dict) -> UpdateChatThemes:
return UpdateChatThemes.construct(**q)
class UpdateChatTitle(Update):
"""
The title of a chat was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param title: The new chat title
:type title: :class:`str`
"""
ID: str = Field("updateChatTitle", alias="@type")
chat_id: int
title: str
@staticmethod
def read(q: dict) -> UpdateChatTitle:
return UpdateChatTitle.construct(**q)
class UpdateChatUnreadMentionCount(Update):
"""
The chat unread_mention_count has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param unread_mention_count: The number of unread mention messages left in the chat
:type unread_mention_count: :class:`int`
"""
ID: str = Field("updateChatUnreadMentionCount", alias="@type")
chat_id: int
unread_mention_count: int
@staticmethod
def read(q: dict) -> UpdateChatUnreadMentionCount:
return UpdateChatUnreadMentionCount.construct(**q)
class UpdateChatVideoChat(Update):
"""
A chat video chat state has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param video_chat: New value of video_chat
:type video_chat: :class:`VideoChat`
"""
ID: str = Field("updateChatVideoChat", alias="@type")
chat_id: int
video_chat: VideoChat
@staticmethod
def read(q: dict) -> UpdateChatVideoChat:
return UpdateChatVideoChat.construct(**q)
class UpdateConnectionState(Update):
"""
The connection state has changed. This update must be used only to show a human-readable description of the connection state
:param state: The new connection state
:type state: :class:`ConnectionState`
"""
ID: str = Field("updateConnectionState", alias="@type")
state: ConnectionState
@staticmethod
def read(q: dict) -> UpdateConnectionState:
return UpdateConnectionState.construct(**q)
class UpdateDeleteMessages(Update):
"""
Some messages were deleted
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_ids: Identifiers of the deleted messages
:type message_ids: :class:`list[int]`
:param is_permanent: True, if the messages are permanently deleted by a user (as opposed to just becoming inaccessible)
:type is_permanent: :class:`bool`
:param from_cache: True, if the messages are deleted only from the cache and can possibly be retrieved again in the future
:type from_cache: :class:`bool`
"""
ID: str = Field("updateDeleteMessages", alias="@type")
chat_id: int
message_ids: list[int]
is_permanent: bool
from_cache: bool
@staticmethod
def read(q: dict) -> UpdateDeleteMessages:
return UpdateDeleteMessages.construct(**q)
class UpdateDiceEmojis(Update):
"""
The list of supported dice emojis has changed
:param emojis: The new list of supported dice emojis
:type emojis: :class:`list[str]`
"""
ID: str = Field("updateDiceEmojis", alias="@type")
emojis: list[str]
@staticmethod
def read(q: dict) -> UpdateDiceEmojis:
return UpdateDiceEmojis.construct(**q)
class UpdateFavoriteStickers(Update):
"""
The list of favorite stickers was updated
:param sticker_ids: The new list of file identifiers of favorite stickers
:type sticker_ids: :class:`list[int]`
"""
ID: str = Field("updateFavoriteStickers", alias="@type")
sticker_ids: list[int]
@staticmethod
def read(q: dict) -> UpdateFavoriteStickers:
return UpdateFavoriteStickers.construct(**q)
class UpdateFile(Update):
"""
Information about a file was updated
:param file: New data about the file
:type file: :class:`File`
"""
ID: str = Field("updateFile", alias="@type")
file: File
@staticmethod
def read(q: dict) -> UpdateFile:
return UpdateFile.construct(**q)
class UpdateFileGenerationStart(Update):
"""
The file generation process needs to be started by the application
:param generation_id: Unique identifier for the generation process
:type generation_id: :class:`int`
:param original_path: The path to a file from which a new file is generated; may be empty
:type original_path: :class:`str`
:param destination_path: The path to a file that must be created and where the new file is generated
:type destination_path: :class:`str`
:param conversion: String specifying the conversion applied to the original file. If conversion is "#url#" then original_path contains an HTTP/HTTPS URL of a file, which must be downloaded by the application
:type conversion: :class:`str`
"""
ID: str = Field("updateFileGenerationStart", alias="@type")
generation_id: int
original_path: str
destination_path: str
conversion: str
@staticmethod
def read(q: dict) -> UpdateFileGenerationStart:
return UpdateFileGenerationStart.construct(**q)
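# A minimal sketch of handling updateFileGenerationStart; `download` and
# `convert` are hypothetical application-side helpers, not part of this API:
#
#     def on_file_generation_start(u: UpdateFileGenerationStart) -> None:
#         if u.conversion == "#url#":
#             # original_path holds an HTTP/HTTPS URL the application must fetch
#             download(u.original_path, u.destination_path)
#         else:
#             convert(u.original_path, u.destination_path, u.conversion)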
class UpdateFileGenerationStop(Update):
"""
File generation is no longer needed
:param generation_id: Unique identifier for the generation process
:type generation_id: :class:`int`
"""
ID: str = Field("updateFileGenerationStop", alias="@type")
generation_id: int
@staticmethod
def read(q: dict) -> UpdateFileGenerationStop:
return UpdateFileGenerationStop.construct(**q)
class UpdateGroupCall(Update):
"""
Information about a group call was updated
:param group_call: New data about a group call
:type group_call: :class:`GroupCall`
"""
ID: str = Field("updateGroupCall", alias="@type")
group_call: GroupCall
@staticmethod
def read(q: dict) -> UpdateGroupCall:
return UpdateGroupCall.construct(**q)
class UpdateGroupCallParticipant(Update):
"""
Information about a group call participant was changed. The updates are sent only after the group call is received through getGroupCall and only if the call is joined or being joined
:param group_call_id: Identifier of the group call
:type group_call_id: :class:`int`
:param participant: New data about a participant
:type participant: :class:`GroupCallParticipant`
"""
ID: str = Field("updateGroupCallParticipant", alias="@type")
group_call_id: int
participant: GroupCallParticipant
@staticmethod
def read(q: dict) -> UpdateGroupCallParticipant:
return UpdateGroupCallParticipant.construct(**q)
class UpdateHavePendingNotifications(Update):
"""
Describes whether there are some pending notification updates. Can be used to prevent the application from being killed while there are pending notifications
:param have_delayed_notifications: True, if there are some delayed notification updates, which will be sent soon
:type have_delayed_notifications: :class:`bool`
:param have_unreceived_notifications: True, if there can be some yet unreceived notifications, which are being fetched from the server
:type have_unreceived_notifications: :class:`bool`
"""
ID: str = Field("updateHavePendingNotifications", alias="@type")
have_delayed_notifications: bool
have_unreceived_notifications: bool
@staticmethod
def read(q: dict) -> UpdateHavePendingNotifications:
return UpdateHavePendingNotifications.construct(**q)
class UpdateInstalledStickerSets(Update):
"""
The list of installed sticker sets was updated
:param is_masks: True, if the list of installed mask sticker sets was updated
:type is_masks: :class:`bool`
:param sticker_set_ids: The new list of installed ordinary sticker sets
:type sticker_set_ids: :class:`list[int]`
"""
ID: str = Field("updateInstalledStickerSets", alias="@type")
is_masks: bool
sticker_set_ids: list[int]
@staticmethod
def read(q: dict) -> UpdateInstalledStickerSets:
return UpdateInstalledStickerSets.construct(**q)
class UpdateLanguagePackStrings(Update):
"""
Some language pack strings have been updated
:param localization_target: Localization target to which the language pack belongs
:type localization_target: :class:`str`
:param language_pack_id: Identifier of the updated language pack
:type language_pack_id: :class:`str`
:param strings: List of changed language pack strings
:type strings: :class:`list[LanguagePackString]`
"""
ID: str = Field("updateLanguagePackStrings", alias="@type")
localization_target: str
language_pack_id: str
strings: list[LanguagePackString]
@staticmethod
def read(q: dict) -> UpdateLanguagePackStrings:
return UpdateLanguagePackStrings.construct(**q)
class UpdateMessageContent(Update):
"""
The message content has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
:param new_content: New message content
:type new_content: :class:`MessageContent`
"""
ID: str = Field("updateMessageContent", alias="@type")
chat_id: int
message_id: int
new_content: MessageContent
@staticmethod
def read(q: dict) -> UpdateMessageContent:
return UpdateMessageContent.construct(**q)
class UpdateMessageContentOpened(Update):
"""
The message content was opened. Updates voice note messages to "listened", video note messages to "viewed" and starts the TTL timer for self-destructing messages
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
"""
ID: str = Field("updateMessageContentOpened", alias="@type")
chat_id: int
message_id: int
@staticmethod
def read(q: dict) -> UpdateMessageContentOpened:
return UpdateMessageContentOpened.construct(**q)
class UpdateMessageEdited(Update):
"""
A message was edited. Changes in the message content will come in a separate updateMessageContent
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
:param edit_date: Point in time (Unix timestamp) when the message was edited
:type edit_date: :class:`int`
:param reply_markup: New message reply markup; may be null, defaults to None
:type reply_markup: :class:`ReplyMarkup`, optional
"""
ID: str = Field("updateMessageEdited", alias="@type")
chat_id: int
message_id: int
edit_date: int
reply_markup: typing.Optional[ReplyMarkup] = None
@staticmethod
def read(q: dict) -> UpdateMessageEdited:
return UpdateMessageEdited.construct(**q)
class UpdateMessageInteractionInfo(Update):
"""
The information about interactions with a message has changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
:param interaction_info: New information about interactions with the message; may be null, defaults to None
:type interaction_info: :class:`MessageInteractionInfo`, optional
"""
ID: str = Field("updateMessageInteractionInfo", alias="@type")
chat_id: int
message_id: int
interaction_info: typing.Optional[MessageInteractionInfo] = None
@staticmethod
def read(q: dict) -> UpdateMessageInteractionInfo:
return UpdateMessageInteractionInfo.construct(**q)
class UpdateMessageIsPinned(Update):
"""
The message pinned state was changed
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: The message identifier
:type message_id: :class:`int`
:param is_pinned: True, if the message is pinned
:type is_pinned: :class:`bool`
"""
ID: str = Field("updateMessageIsPinned", alias="@type")
chat_id: int
message_id: int
is_pinned: bool
@staticmethod
def read(q: dict) -> UpdateMessageIsPinned:
return UpdateMessageIsPinned.construct(**q)
class UpdateMessageLiveLocationViewed(Update):
"""
A message with a live location was viewed. When the update is received, the application is supposed to update the live location
:param chat_id: Identifier of the chat with the live location message
:type chat_id: :class:`int`
:param message_id: Identifier of the message with live location
:type message_id: :class:`int`
"""
ID: str = Field("updateMessageLiveLocationViewed", alias="@type")
chat_id: int
message_id: int
@staticmethod
def read(q: dict) -> UpdateMessageLiveLocationViewed:
return UpdateMessageLiveLocationViewed.construct(**q)
class UpdateMessageMentionRead(Update):
"""
A message with an unread mention was read
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param message_id: Message identifier
:type message_id: :class:`int`
:param unread_mention_count: The new number of unread mention messages left in the chat
:type unread_mention_count: :class:`int`
"""
ID: str = Field("updateMessageMentionRead", alias="@type")
chat_id: int
message_id: int
unread_mention_count: int
@staticmethod
def read(q: dict) -> UpdateMessageMentionRead:
return UpdateMessageMentionRead.construct(**q)
class UpdateMessageSendAcknowledged(Update):
"""
A request to send a message has reached the Telegram server. This doesn't mean that the message will be sent successfully or even that the send message request will be processed. This update will be sent only if the option "use_quick_ack" is set to true. This update may be sent multiple times for the same message
:param chat_id: The chat identifier of the sent message
:type chat_id: :class:`int`
:param message_id: A temporary message identifier
:type message_id: :class:`int`
"""
ID: str = Field("updateMessageSendAcknowledged", alias="@type")
chat_id: int
message_id: int
@staticmethod
def read(q: dict) -> UpdateMessageSendAcknowledged:
return UpdateMessageSendAcknowledged.construct(**q)
class UpdateMessageSendFailed(Update):
"""
A message failed to send. Be aware that some messages being sent can be irrecoverably deleted, in which case updateDeleteMessages will be received instead of this update
:param message: The message that failed to send
:type message: :class:`Message`
:param old_message_id: The previous temporary message identifier
:type old_message_id: :class:`int`
:param error_code: An error code
:type error_code: :class:`int`
:param error_message: Error message
:type error_message: :class:`str`
"""
ID: str = Field("updateMessageSendFailed", alias="@type")
message: Message
old_message_id: int
error_code: int
error_message: str
@staticmethod
def read(q: dict) -> UpdateMessageSendFailed:
return UpdateMessageSendFailed.construct(**q)
class UpdateMessageSendSucceeded(Update):
"""
A message has been successfully sent
:param message: The sent message. Usually only the message identifier, date, and content are changed, but almost all other fields can also change
:type message: :class:`Message`
:param old_message_id: The previous temporary message identifier
:type old_message_id: :class:`int`
"""
ID: str = Field("updateMessageSendSucceeded", alias="@type")
message: Message
old_message_id: int
@staticmethod
def read(q: dict) -> UpdateMessageSendSucceeded:
return UpdateMessageSendSucceeded.construct(**q)
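# A minimal sketch (assuming a hypothetical local `messages_by_id` cache keyed
# by message identifier) of rebinding a message after updateMessageSendSucceeded,
# where the temporary identifier is replaced by the server-assigned one:
#
#     def on_send_succeeded(u: UpdateMessageSendSucceeded) -> None:
#         messages_by_id.pop(u.old_message_id, None)
#         messages_by_id[u.message.id] = u.message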
class UpdateNewCallSignalingData(Update):
"""
New call signaling data arrived
:param call_id: The call identifier
:type call_id: :class:`int`
:param data: The data
:type data: :class:`str`
"""
ID: str = Field("updateNewCallSignalingData", alias="@type")
call_id: int
data: str
@staticmethod
def read(q: dict) -> UpdateNewCallSignalingData:
return UpdateNewCallSignalingData.construct(**q)
class UpdateNewCallbackQuery(Update):
"""
A new incoming callback query; for bots only
:param id: Unique query identifier
:type id: :class:`int`
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param chat_id: Identifier of the chat where the query was sent
:type chat_id: :class:`int`
:param message_id: Identifier of the message, from which the query originated
:type message_id: :class:`int`
:param chat_instance: Identifier that uniquely corresponds to the chat to which the message was sent
:type chat_instance: :class:`int`
:param payload: Query payload
:type payload: :class:`CallbackQueryPayload`
"""
ID: str = Field("updateNewCallbackQuery", alias="@type")
id: int
sender_user_id: int
chat_id: int
message_id: int
chat_instance: int
payload: CallbackQueryPayload
@staticmethod
def read(q: dict) -> UpdateNewCallbackQuery:
return UpdateNewCallbackQuery.construct(**q)
class UpdateNewChat(Update):
"""
A new chat has been loaded/created. This update is guaranteed to come before the chat identifier is returned to the application. The chat field changes will be reported through separate updates
:param chat: The chat
:type chat: :class:`Chat`
"""
ID: str = Field("updateNewChat", alias="@type")
chat: Chat
@staticmethod
def read(q: dict) -> UpdateNewChat:
return UpdateNewChat.construct(**q)
class UpdateNewChatJoinRequest(Update):
"""
A user sent a join request to a chat; for bots only
:param chat_id: Chat identifier
:type chat_id: :class:`int`
:param request: Join request
:type request: :class:`ChatJoinRequest`
:param invite_link: The invite link that was used to send the join request; may be null, defaults to None
:type invite_link: :class:`ChatInviteLink`, optional
"""
ID: str = Field("updateNewChatJoinRequest", alias="@type")
chat_id: int
request: ChatJoinRequest
invite_link: typing.Optional[ChatInviteLink] = None
@staticmethod
def read(q: dict) -> UpdateNewChatJoinRequest:
return UpdateNewChatJoinRequest.construct(**q)
class UpdateNewChosenInlineResult(Update):
"""
The user has chosen a result of an inline query; for bots only
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param user_location: User location; may be null, defaults to None
:type user_location: :class:`Location`, optional
:param query: Text of the query
:type query: :class:`str`
:param result_id: Identifier of the chosen result
:type result_id: :class:`str`
:param inline_message_id: Identifier of the sent inline message, if known
:type inline_message_id: :class:`str`
"""
ID: str = Field("updateNewChosenInlineResult", alias="@type")
sender_user_id: int
user_location: typing.Optional[Location] = None
query: str
result_id: str
inline_message_id: str
@staticmethod
def read(q: dict) -> UpdateNewChosenInlineResult:
return UpdateNewChosenInlineResult.construct(**q)
class UpdateNewCustomEvent(Update):
"""
A new incoming event; for bots only
:param event: A JSON-serialized event
:type event: :class:`str`
"""
ID: str = Field("updateNewCustomEvent", alias="@type")
event: str
@staticmethod
def read(q: dict) -> UpdateNewCustomEvent:
return UpdateNewCustomEvent.construct(**q)
class UpdateNewCustomQuery(Update):
"""
A new incoming query; for bots only
:param id: The query identifier
:type id: :class:`int`
:param data: JSON-serialized query data
:type data: :class:`str`
:param timeout: Query timeout
:type timeout: :class:`int`
"""
ID: str = Field("updateNewCustomQuery", alias="@type")
id: int
data: str
timeout: int
@staticmethod
def read(q: dict) -> UpdateNewCustomQuery:
return UpdateNewCustomQuery.construct(**q)
class UpdateNewInlineCallbackQuery(Update):
"""
A new incoming callback query from a message sent via a bot; for bots only
:param id: Unique query identifier
:type id: :class:`int`
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param inline_message_id: Identifier of the inline message, from which the query originated
:type inline_message_id: :class:`str`
:param chat_instance: An identifier uniquely corresponding to the chat a message was sent to
:type chat_instance: :class:`int`
:param payload: Query payload
:type payload: :class:`CallbackQueryPayload`
"""
ID: str = Field("updateNewInlineCallbackQuery", alias="@type")
id: int
sender_user_id: int
inline_message_id: str
chat_instance: int
payload: CallbackQueryPayload
@staticmethod
def read(q: dict) -> UpdateNewInlineCallbackQuery:
return UpdateNewInlineCallbackQuery.construct(**q)
class UpdateNewInlineQuery(Update):
"""
A new incoming inline query; for bots only
:param id: Unique query identifier
:type id: :class:`int`
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param user_location: User location; may be null, defaults to None
:type user_location: :class:`Location`, optional
:param chat_type: The type of the chat, from which the query originated; may be null if unknown, defaults to None
:type chat_type: :class:`ChatType`, optional
:param query: Text of the query
:type query: :class:`str`
:param offset: Offset of the first entry to return
:type offset: :class:`str`
"""
ID: str = Field("updateNewInlineQuery", alias="@type")
id: int
sender_user_id: int
user_location: typing.Optional[Location] = None
chat_type: typing.Optional[ChatType] = None
query: str
offset: str
@staticmethod
def read(q: dict) -> UpdateNewInlineQuery:
return UpdateNewInlineQuery.construct(**q)
class UpdateNewMessage(Update):
"""
A new message was received; can also be an outgoing message
:param message: The new message
:type message: :class:`Message`
"""
ID: str = Field("updateNewMessage", alias="@type")
message: Message
@staticmethod
def read(q: dict) -> UpdateNewMessage:
return UpdateNewMessage.construct(**q)
class UpdateNewPreCheckoutQuery(Update):
"""
A new incoming pre-checkout query; for bots only. Contains full information about a checkout
:param id: Unique query identifier
:type id: :class:`int`
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param currency: Currency for the product price
:type currency: :class:`str`
:param total_amount: Total price for the product, in the smallest units of the currency
:type total_amount: :class:`int`
:param invoice_payload: Invoice payload
:type invoice_payload: :class:`str`
:param shipping_option_id: Identifier of a shipping option chosen by the user; may be empty if not applicable
:type shipping_option_id: :class:`str`
:param order_info: Information about the order; may be null, defaults to None
:type order_info: :class:`OrderInfo`, optional
"""
ID: str = Field("updateNewPreCheckoutQuery", alias="@type")
id: int
sender_user_id: int
currency: str
total_amount: int
invoice_payload: str
shipping_option_id: str
order_info: typing.Optional[OrderInfo] = None
@staticmethod
def read(q: dict) -> UpdateNewPreCheckoutQuery:
return UpdateNewPreCheckoutQuery.construct(**q)
class UpdateNewShippingQuery(Update):
"""
A new incoming shipping query; for bots only. Only for invoices with flexible price
:param id: Unique query identifier
:type id: :class:`int`
:param sender_user_id: Identifier of the user who sent the query
:type sender_user_id: :class:`int`
:param invoice_payload: Invoice payload
:type invoice_payload: :class:`str`
:param shipping_address: User shipping address
:type shipping_address: :class:`Address`
"""
ID: str = Field("updateNewShippingQuery", alias="@type")
id: int
sender_user_id: int
invoice_payload: str
shipping_address: Address
@staticmethod
def read(q: dict) -> UpdateNewShippingQuery:
return UpdateNewShippingQuery.construct(**q)
class UpdateNotification(Update):
"""
A notification was changed
:param notification_group_id: Unique notification group identifier
:type notification_group_id: :class:`int`
:param notification: Changed notification
:type notification: :class:`Notification`
"""
ID: str = Field("updateNotification", alias="@type")
notification_group_id: int
notification: Notification
@staticmethod
def read(q: dict) -> UpdateNotification:
return UpdateNotification.construct(**q)
class UpdateNotificationGroup(Update):
"""
A list of active notifications in a notification group has changed
:param notification_group_id: Unique notification group identifier
:type notification_group_id: :class:`int`
:param type_: New type of the notification group
:type type_: :class:`NotificationGroupType`
:param chat_id: Identifier of a chat to which all notifications in the group belong
:type chat_id: :class:`int`
    :param notification_settings_chat_id: Identifier of the chat whose notification settings must be applied to the added notifications
:type notification_settings_chat_id: :class:`int`
:param is_silent: True, if the notifications must be shown without sound
:type is_silent: :class:`bool`
:param total_count: Total number of unread notifications in the group, can be bigger than number of active notifications
:type total_count: :class:`int`
:param added_notifications: List of added group notifications, sorted by notification ID
:type added_notifications: :class:`list[Notification]`
:param removed_notification_ids: Identifiers of removed group notifications, sorted by notification ID
:type removed_notification_ids: :class:`list[int]`
"""
ID: str = Field("updateNotificationGroup", alias="@type")
notification_group_id: int
type_: NotificationGroupType = Field(..., alias='type')
chat_id: int
notification_settings_chat_id: int
is_silent: bool
total_count: int
added_notifications: list[Notification]
removed_notification_ids: list[int]
@staticmethod
def read(q: dict) -> UpdateNotificationGroup:
return UpdateNotificationGroup.construct(**q)
class UpdateOption(Update):
"""
An option changed its value
:param name: The option name
:type name: :class:`str`
:param value: The new option value
:type value: :class:`OptionValue`
"""
ID: str = Field("updateOption", alias="@type")
name: str
value: OptionValue
@staticmethod
def read(q: dict) -> UpdateOption:
return UpdateOption.construct(**q)
class UpdatePoll(Update):
"""
A poll was updated; for bots only
:param poll: New data about the poll
:type poll: :class:`Poll`
"""
ID: str = Field("updatePoll", alias="@type")
poll: Poll
@staticmethod
def read(q: dict) -> UpdatePoll:
return UpdatePoll.construct(**q)
class UpdatePollAnswer(Update):
"""
A user changed the answer to a poll; for bots only
:param poll_id: Unique poll identifier
:type poll_id: :class:`int`
:param user_id: The user, who changed the answer to the poll
:type user_id: :class:`int`
:param option_ids: 0-based identifiers of answer options, chosen by the user
:type option_ids: :class:`list[int]`
"""
ID: str = Field("updatePollAnswer", alias="@type")
poll_id: int
user_id: int
option_ids: list[int]
@staticmethod
def read(q: dict) -> UpdatePollAnswer:
return UpdatePollAnswer.construct(**q)
class UpdateRecentStickers(Update):
"""
The list of recently used stickers was updated
:param is_attached: True, if the list of stickers attached to photo or video files was updated, otherwise the list of sent stickers is updated
:type is_attached: :class:`bool`
:param sticker_ids: The new list of file identifiers of recently used stickers
:type sticker_ids: :class:`list[int]`
"""
ID: str = Field("updateRecentStickers", alias="@type")
is_attached: bool
sticker_ids: list[int]
@staticmethod
def read(q: dict) -> UpdateRecentStickers:
return UpdateRecentStickers.construct(**q)
class UpdateSavedAnimations(Update):
"""
The list of saved animations was updated
:param animation_ids: The new list of file identifiers of saved animations
:type animation_ids: :class:`list[int]`
"""
ID: str = Field("updateSavedAnimations", alias="@type")
animation_ids: list[int]
@staticmethod
def read(q: dict) -> UpdateSavedAnimations:
return UpdateSavedAnimations.construct(**q)
class UpdateScopeNotificationSettings(Update):
"""
Notification settings for some type of chats were updated
:param scope: Types of chats for which notification settings were updated
:type scope: :class:`NotificationSettingsScope`
:param notification_settings: The new notification settings
:type notification_settings: :class:`ScopeNotificationSettings`
"""
ID: str = Field("updateScopeNotificationSettings", alias="@type")
scope: NotificationSettingsScope
notification_settings: ScopeNotificationSettings
@staticmethod
def read(q: dict) -> UpdateScopeNotificationSettings:
return UpdateScopeNotificationSettings.construct(**q)
class UpdateSecretChat(Update):
"""
Some data of a secret chat has changed. This update is guaranteed to come before the secret chat identifier is returned to the application
:param secret_chat: New data about the secret chat
:type secret_chat: :class:`SecretChat`
"""
ID: str = Field("updateSecretChat", alias="@type")
secret_chat: SecretChat
@staticmethod
def read(q: dict) -> UpdateSecretChat:
return UpdateSecretChat.construct(**q)
class UpdateSelectedBackground(Update):
"""
The selected background has changed
:param for_dark_theme: True, if background for dark theme has changed
:type for_dark_theme: :class:`bool`
:param background: The new selected background; may be null, defaults to None
:type background: :class:`Background`, optional
"""
ID: str = Field("updateSelectedBackground", alias="@type")
for_dark_theme: bool
background: typing.Optional[Background] = None
@staticmethod
def read(q: dict) -> UpdateSelectedBackground:
return UpdateSelectedBackground.construct(**q)
class UpdateServiceNotification(Update):
"""
A service notification from the server was received. Upon receiving this the application must show a popup with the content of the notification
:param type_: Notification type. If type begins with "AUTH_KEY_DROP_", then two buttons "Cancel" and "Log out" must be shown under notification; if user presses the second, all local data must be destroyed using Destroy method
:type type_: :class:`str`
:param content: Notification content
:type content: :class:`MessageContent`
"""
ID: str = Field("updateServiceNotification", alias="@type")
type_: str = Field(..., alias='type')
content: MessageContent
@staticmethod
def read(q: dict) -> UpdateServiceNotification:
return UpdateServiceNotification.construct(**q)
class UpdateStickerSet(Update):
"""
A sticker set has changed
:param sticker_set: The sticker set
:type sticker_set: :class:`StickerSet`
"""
ID: str = Field("updateStickerSet", alias="@type")
sticker_set: StickerSet
@staticmethod
def read(q: dict) -> UpdateStickerSet:
return UpdateStickerSet.construct(**q)
class UpdateSuggestedActions(Update):
"""
    The list of actions suggested to the user has changed
:param added_actions: Added suggested actions
:type added_actions: :class:`list[SuggestedAction]`
:param removed_actions: Removed suggested actions
:type removed_actions: :class:`list[SuggestedAction]`
"""
ID: str = Field("updateSuggestedActions", alias="@type")
added_actions: list[SuggestedAction]
removed_actions: list[SuggestedAction]
@staticmethod
def read(q: dict) -> UpdateSuggestedActions:
return UpdateSuggestedActions.construct(**q)
class UpdateSupergroup(Update):
"""
Some data of a supergroup or a channel has changed. This update is guaranteed to come before the supergroup identifier is returned to the application
:param supergroup: New data about the supergroup
:type supergroup: :class:`Supergroup`
"""
ID: str = Field("updateSupergroup", alias="@type")
supergroup: Supergroup
@staticmethod
def read(q: dict) -> UpdateSupergroup:
return UpdateSupergroup.construct(**q)
class UpdateSupergroupFullInfo(Update):
"""
Some data in supergroupFullInfo has been changed
:param supergroup_id: Identifier of the supergroup or channel
:type supergroup_id: :class:`int`
:param supergroup_full_info: New full information about the supergroup
:type supergroup_full_info: :class:`SupergroupFullInfo`
"""
ID: str = Field("updateSupergroupFullInfo", alias="@type")
supergroup_id: int
supergroup_full_info: SupergroupFullInfo
@staticmethod
def read(q: dict) -> UpdateSupergroupFullInfo:
return UpdateSupergroupFullInfo.construct(**q)
class UpdateTermsOfService(Update):
"""
New terms of service must be accepted by the user. If the terms of service are declined, then the deleteAccount method must be called with the reason "Decline ToS update"
:param terms_of_service_id: Identifier of the terms of service
:type terms_of_service_id: :class:`str`
:param terms_of_service: The new terms of service
:type terms_of_service: :class:`TermsOfService`
"""
ID: str = Field("updateTermsOfService", alias="@type")
terms_of_service_id: str
terms_of_service: TermsOfService
@staticmethod
def read(q: dict) -> UpdateTermsOfService:
return UpdateTermsOfService.construct(**q)
class UpdateTrendingStickerSets(Update):
"""
The list of trending sticker sets was updated or some of them were viewed
:param sticker_sets: The prefix of the list of trending sticker sets with the newest trending sticker sets
:type sticker_sets: :class:`StickerSets`
"""
ID: str = Field("updateTrendingStickerSets", alias="@type")
sticker_sets: StickerSets
@staticmethod
def read(q: dict) -> UpdateTrendingStickerSets:
return UpdateTrendingStickerSets.construct(**q)
class UpdateUnreadChatCount(Update):
"""
Number of unread chats, i.e. with unread messages or marked as unread, has changed. This update is sent only if the message database is used
:param chat_list: The chat list with changed number of unread messages
:type chat_list: :class:`ChatList`
:param total_count: Approximate total number of chats in the chat list
:type total_count: :class:`int`
:param unread_count: Total number of unread chats
:type unread_count: :class:`int`
:param unread_unmuted_count: Total number of unread unmuted chats
:type unread_unmuted_count: :class:`int`
:param marked_as_unread_count: Total number of chats marked as unread
:type marked_as_unread_count: :class:`int`
:param marked_as_unread_unmuted_count: Total number of unmuted chats marked as unread
:type marked_as_unread_unmuted_count: :class:`int`
"""
ID: str = Field("updateUnreadChatCount", alias="@type")
chat_list: ChatList
total_count: int
unread_count: int
unread_unmuted_count: int
marked_as_unread_count: int
marked_as_unread_unmuted_count: int
@staticmethod
def read(q: dict) -> UpdateUnreadChatCount:
return UpdateUnreadChatCount.construct(**q)
class UpdateUnreadMessageCount(Update):
"""
Number of unread messages in a chat list has changed. This update is sent only if the message database is used
:param chat_list: The chat list with changed number of unread messages
:type chat_list: :class:`ChatList`
:param unread_count: Total number of unread messages
:type unread_count: :class:`int`
:param unread_unmuted_count: Total number of unread messages in unmuted chats
:type unread_unmuted_count: :class:`int`
"""
ID: str = Field("updateUnreadMessageCount", alias="@type")
chat_list: ChatList
unread_count: int
unread_unmuted_count: int
@staticmethod
def read(q: dict) -> UpdateUnreadMessageCount:
return UpdateUnreadMessageCount.construct(**q)
class UpdateUser(Update):
"""
Some data of a user has changed. This update is guaranteed to come before the user identifier is returned to the application
:param user: New data about the user
:type user: :class:`User`
"""
ID: str = Field("updateUser", alias="@type")
user: User
@staticmethod
def read(q: dict) -> UpdateUser:
return UpdateUser.construct(**q)
class UpdateUserFullInfo(Update):
"""
Some data in userFullInfo has been changed
:param user_id: User identifier
:type user_id: :class:`int`
:param user_full_info: New full information about the user
:type user_full_info: :class:`UserFullInfo`
"""
ID: str = Field("updateUserFullInfo", alias="@type")
user_id: int
user_full_info: UserFullInfo
@staticmethod
def read(q: dict) -> UpdateUserFullInfo:
return UpdateUserFullInfo.construct(**q)
class UpdateUserPrivacySettingRules(Update):
"""
Some privacy setting rules have been changed
:param setting: The privacy setting
:type setting: :class:`UserPrivacySetting`
:param rules: New privacy rules
:type rules: :class:`UserPrivacySettingRules`
"""
ID: str = Field("updateUserPrivacySettingRules", alias="@type")
setting: UserPrivacySetting
rules: UserPrivacySettingRules
@staticmethod
def read(q: dict) -> UpdateUserPrivacySettingRules:
return UpdateUserPrivacySettingRules.construct(**q)
class UpdateUserStatus(Update):
"""
The user went online or offline
:param user_id: User identifier
:type user_id: :class:`int`
:param status: New status of the user
:type status: :class:`UserStatus`
"""
ID: str = Field("updateUserStatus", alias="@type")
user_id: int
status: UserStatus
@staticmethod
def read(q: dict) -> UpdateUserStatus:
return UpdateUserStatus.construct(**q)
class UpdateUsersNearby(Update):
"""
The list of users nearby has changed. The update is guaranteed to be sent only 60 seconds after a successful searchChatsNearby request
:param users_nearby: The new list of users nearby
:type users_nearby: :class:`list[ChatNearby]`
"""
ID: str = Field("updateUsersNearby", alias="@type")
users_nearby: list[ChatNearby]
@staticmethod
def read(q: dict) -> UpdateUsersNearby:
return UpdateUsersNearby.construct(**q)
# ===== built-in/TensorFlow/Official/cv/image_classification/ResnetVariant_for_TensorFlow/automl/vega/algorithms/nas/sm_nas/mmdet_meta_cfgs/bbox_head/__init__.py (Huawei-Ascend/modelzoo; Apache-2.0) =====
from .cascade_head import CascadeFCBBoxHead
from .convfc_bbox_head import SharedFCBBoxHead
__all__ = [
'CascadeFCBBoxHead',
'SharedFCBBoxHead']
# ===== win/devkit/other/pymel/extras/completion/py/maya/app/edl/importExport.py (leegoonz/Maya-devkit; ADSL) =====
import tempfile
import maya.OpenMaya as OpenMaya
import maya.OpenMayaRender as OpenMayaRender
import maya.OpenMayaMPx as OpenMayaMPx
import maya.cmds as cmds
import maya
import re
from maya.app.edl.fcp import *
class ImportExport(OpenMayaMPx.MPxCommand):
def __del__(self):
pass
def __init__(self):
pass
class Exporter(ImportExport):
def __init__(self):
pass
def doIt(self, fileName):
pass
def setAllowPlayblast(self, allow):
"""
If true, will re-playblast of all shots whose clips are out of date
or non-existent.
"""
pass
class Importer(ImportExport):
def __init__(self):
pass
def doIt(self, fileName):
"""
Reads an EDL file into Maya. Will generate shots, tracks and audio in Maya that
corresponds to the tracks and clips in the EDL.
"""
pass
def setStartFrameOverride(self, frame):
pass
def _setTimeCode(timecode):
pass
def doExport(fileName, allowPlayblast):
"""
Exports the Maya sequence using the EDL Exporter class.
"""
pass
def doMel(*args, **kwargs):
"""
Takes as input a string containing MEL code, evaluates it, and returns the result.
This function takes a string which contains MEL code and evaluates it using
the MEL interpreter. The result is converted into a Python data type and is
returned.
If an error occurs during the execution of the MEL script, a Python exception
is raised with the appropriate error message.
"""
pass
def audioClipCompare(a, b):
pass
def _getValidClipObjectName(clipName, isVideo):
pass
def doImport(fileName, useStartFrameOverride, startFrame):
"""
Imports the specified file using the EDL Importer class.
"""
pass
def _nameToNode(name):
pass
def getTimeCode():
pass
def videoClipCompare(a, b):
pass
def getShotsResolution():
"""
Returns the video resolution of the sequencer if all the shots have the same resolution
Otherwise it returns False, 0, 0
"""
pass
mayaFrameRates = {}
# ===== packages/mcni/python/mcni/instrument_simulator/__init__.py (mcvine/mcvine; BSD-3-Clause) =====
#!/usr/bin/env python
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Michael A.G. Aivazis
# California Institute of Technology
# (C) 2006-2010 All Rights Reserved
#
# <LicenseText>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
## Note:
## 1. This package depends on dsm
def copyright():
    return "mcni.instrument_simulators module: Copyright (c) 2006-2010 Jiao Lin"
def simulator(neutron_coordinates_transformer):
t = neutron_coordinates_transformer
from .AbstractInstrumentSimulator import AbstractInstrumentSimulator as base
class Simulator(base):
neutron_coordinates_transformer = t
pass
return Simulator()
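The `simulator()` factory above bakes its argument into a class attribute of a locally defined subclass and returns an instance. A minimal standalone sketch of that pattern (with made-up names, not mcni's API):

```python
def make_tagged(tag_value):
    class Tagged:
        # Read from the enclosing function scope; the attribute name
        # deliberately differs from the parameter name (see note below).
        tag = tag_value
    return Tagged()

sim_a = make_tagged("upstream")
sim_b = make_tagged("downstream")
print(sim_a.tag, sim_b.tag)  # upstream downstream
```

This is likely why the original binds `t = neutron_coordinates_transformer` first: in a class body, a name that is also assigned there is resolved in class/global scope only, so `neutron_coordinates_transformer = neutron_coordinates_transformer` would raise NameError; reading the value through the shorter alias sidesteps that.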
from mcni.neutron_coordinates_transformers import default as default_neutron_coordinates_transformer
default_simulator = simulator( default_neutron_coordinates_transformer )
# version
__id__ = "$Id$"
# End of file
# ===== week06/lecture/examples/src6/2/uppercase0.py (uldash/CS50x; MIT) =====
# Uppercases string one character at a time
from cs50 import get_string
s = get_string("Before: ")
print("After: ", end="")
for c in s:
print(c.upper(), end="")
print()
# ===== baekjoon/easy-math/17362-finger.py (honux77/algorithm; MIT) =====
n = int(input()) % 8
if n == 0:
print(2)
elif n <= 5:
print(n)
else:
print(10 - n)
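The branchy rule above implements the period-8 finger-counting zig-zag. Wrapped in a function (a hypothetical name for illustration), the first few values can be checked directly:

```python
def finger(n):
    r = n % 8
    if r == 0:
        return 2
    if r <= 5:
        return r
    return 10 - r

print([finger(n) for n in range(1, 11)])  # [1, 2, 3, 4, 5, 4, 3, 2, 1, 2]
```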
# ===== libs/sqlservice/utils.py (smartadvising/smartadvising-api; MIT) =====
"""
Utilities
---------
The utilities module.
"""
from collections.abc import Mapping, Sequence
from functools import wraps
import types
class FrozenDict(Mapping):
"""A frozen dictionary implementation that prevents the object from being
mutated. This is primarily used when defining a dict-like object as a class
attribute that shouldn't be mutated by subclasses.
"""
def __init__(self, *args, **kwargs):
self._dict = dict(*args, **kwargs)
def copy(self):
return self._dict.copy()
def __getitem__(self, key):
return self._dict.__getitem__(key)
def __contains__(self, item):
return self._dict.__contains__(item)
def __iter__(self):
return self._dict.__iter__()
def __len__(self):
return self._dict.__len__()
def __repr__(self):
return '<%s %r>' % (self.__class__.__name__, self._dict)
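`FrozenDict` is read-only simply because it implements only the `Mapping` protocol, which has no `__setitem__`. A condensed copy is inlined below so the snippet runs standalone:

```python
from collections.abc import Mapping

class FrozenDict(Mapping):
    def __init__(self, *args, **kwargs):
        self._dict = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self._dict[key]

    def __iter__(self):
        return iter(self._dict)

    def __len__(self):
        return len(self._dict)

fd = FrozenDict(a=1)
print(fd["a"])  # 1
try:
    fd["b"] = 2  # Mapping defines no __setitem__, so this raises
except TypeError as exc:
    print("immutable:", exc)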
def classonce(meth):
"""Decorator that executes a class method once, stores the results at the
class level, and subsequently returns those results for every future method
call.
"""
@wraps(meth)
def decorated(cls, *args, **kargs):
cached_attr = '__{0}'.format(meth.__name__)
if not hasattr(cls, cached_attr):
result = meth(cls, *args, **kargs)
setattr(cls, cached_attr, result)
return getattr(cls, cached_attr)
return decorated
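A usage sketch of `classonce` (the decorator is inlined so the snippet runs on its own): the wrapped class method runs once, caches its result on the class, and later calls return the cached value.

```python
from functools import wraps

def classonce(meth):
    @wraps(meth)
    def decorated(cls, *args, **kargs):
        cached_attr = '__{0}'.format(meth.__name__)
        if not hasattr(cls, cached_attr):
            setattr(cls, cached_attr, meth(cls, *args, **kargs))
        return getattr(cls, cached_attr)
    return decorated

class Model:
    calls = 0

    @classmethod
    @classonce
    def columns(cls):
        cls.calls += 1  # count real executions
        return ['id', 'name']

print(Model.columns(), Model.columns(), Model.calls)
# ['id', 'name'] ['id', 'name'] 1
```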
def flatten(seq):
"""Flatten `seq` a single level deep."""
for item in seq:
if is_sequence(item):
for itm in item:
yield itm
else:
yield item
def is_sequence(value):
"""Test if `value` is a sequence but ``str``. This function is mainly used
to determine if `value` can be treated like a ``list`` for iteration
purposes.
"""
return (is_generator(value) or
(isinstance(value, Sequence) and not isinstance(value, str)))
def is_generator(value):
"""Return whether `value` is a generator or generator-like."""
return (isinstance(value, types.GeneratorType) or
(hasattr(value, '__iter__') and hasattr(value, '__next__') and
not hasattr(value, '__getitem__')))
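A standalone check of the `flatten()`/`is_sequence()` contract above: strings are sequences but are treated as scalars, and nesting is only flattened one level deep. The helpers are inlined (with the inner loop compressed to `yield from`, which is equivalent) so the snippet runs on its own:

```python
import types
from collections.abc import Sequence

def is_generator(value):
    return (isinstance(value, types.GeneratorType) or
            (hasattr(value, '__iter__') and hasattr(value, '__next__') and
             not hasattr(value, '__getitem__')))

def is_sequence(value):
    return (is_generator(value) or
            (isinstance(value, Sequence) and not isinstance(value, str)))

def flatten(seq):
    for item in seq:
        if is_sequence(item):
            yield from item
        else:
            yield item

print(list(flatten([1, [2, 3], "ab", [[4], 5]])))
# [1, 2, 3, 'ab', [4], 5]
```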
# ===== py/2017/3B.py (pedrotari7/advent_of_code; MIT) =====
a = 289326
coords = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]  # the 8 neighbour offsets
x, y = (0, 0)
dx, dy = (1, 0)
M = {(x, y): 1}
while M[(x, y)] < a:
    x, y = x + dx, y + dy
    # Each new spiral cell holds the sum of its already-filled neighbours.
    M[(x, y)] = sum([M[(x + ox, y + oy)] for ox, oy in coords if (x + ox, y + oy) in M])
    # Turn left at the spiral's corner cells.
    if (x == y) or (x > 0 and x == 1 - y) or (x < 0 and x == -y):
        dx, dy = -dy, dx
print(M[(x, y)])
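Wrapped in a function (a hypothetical name for illustration), the loop can be checked against the known start of the neighbour-sum spiral, 1, 1, 2, 4, 5, 10, ...: it stops at the first stored value that is at least the target.

```python
coords = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def first_at_least(a):
    x, y = 0, 0
    dx, dy = 1, 0
    M = {(0, 0): 1}
    while M[(x, y)] < a:
        x, y = x + dx, y + dy
        M[(x, y)] = sum(M[(x + ox, y + oy)] for ox, oy in coords if (x + ox, y + oy) in M)
        if (x == y) or (x > 0 and x == 1 - y) or (x < 0 and x == -y):
            dx, dy = -dy, dx
    return M[(x, y)]

print(first_at_least(6))  # 10
```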
# ===== test/IECoreRI/All.py (gcodebackups/cortex-vfx; BSD-3-Clause) =====
##########################################################################
#
# Copyright (c) 2007-2013, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * Neither the name of Image Engine Design nor the names of any
# other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import sys
import unittest
import IECore
import IECoreRI
from SLOReader import *
from Renderer import *
from Instancing import *
from PTCParticleReader import *
from PTCParticleWriter import *
from ArchiveRecord import *
from DoubleSided import *
from Orientation import *
from MultipleContextsTest import *
from Camera import *
from CurvesTest import *
from TextureOrientationTest import *
from ArrayPrimVarTest import *
from CoordinateSystemTest import *
from IlluminateTest import *
from SubsurfaceTest import *
from PatchMeshTest import *
from RIBWriterTest import *
from ParameterisedProcedural import *
from MotionTest import MotionTest
from PythonProceduralTest import PythonProceduralTest
from DetailTest import DetailTest
from ProceduralThreadingTest import ProceduralThreadingTest
from StringArrayParameterTest import StringArrayParameterTest
from CoshaderTest import CoshaderTest
from GroupTest import GroupTest
from DspyTest import DspyTest
from RerenderingTest import RerenderingTest
if hasattr( IECoreRI, "SXRenderer" ) :
from SXRendererTest import SXRendererTest
if hasattr( IECoreRI, "GXEvaluator" ) :
from GXEvaluatorTest import GXEvaluatorTest
if hasattr( IECoreRI, "DTEXDeepImageReader" ) :
from DTEXDeepImageReaderTest import TestDTEXDeepImageReader
from DTEXDeepImageWriterTest import TestDTEXDeepImageWriter
if hasattr( IECoreRI, "SHWDeepImageReader" ) :
from SHWDeepImageReaderTest import TestSHWDeepImageReader
from SHWDeepImageWriterTest import TestSHWDeepImageWriter
if IECore.withFreeType() :
from TextTest import *
unittest.TestProgram(
testRunner = unittest.TextTestRunner(
stream = IECore.CompoundStream(
[
sys.stderr,
open( "test/IECoreRI/resultsPython.txt", "w" )
]
),
verbosity = 2
)
)
# ===== transit/helpers.py (moredatarequired/python-stitch-client; Apache-2.0) =====
## Copyright 2014 Cognitect. All Rights Reserved.
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS-IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
import itertools
from transit.pyversion import imap, izip
def mapcat(f, i):
    return itertools.chain.from_iterable(imap(f, i))

def pairs(i):
    return izip(*[iter(i)] * 2)

cycle = itertools.cycle

def take(n, i):
    return itertools.islice(i, 0, n)
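The three helpers above compose into familiar sequence pipelines. A standalone sketch of how they behave, assuming Python 3 where `imap`/`izip` (from `transit.pyversion`) resolve to the built-in `map`/`zip`:

```python
import itertools

# Standalone copies of the helpers, with map/zip in place of imap/izip.
def mapcat(f, i):
    # Map f over i, then flatten one level.
    return itertools.chain.from_iterable(map(f, i))

def pairs(i):
    # Group a flat iterable into consecutive 2-tuples.
    return zip(*[iter(i)] * 2)

def take(n, i):
    # First n items of a (possibly infinite) iterable.
    return itertools.islice(i, 0, n)

print(list(mapcat(lambda x: (x, -x), [1, 2])))  # [1, -1, 2, -2]
print(list(pairs("abcd")))                      # [('a', 'b'), ('c', 'd')]
print(list(take(3, itertools.count())))         # [0, 1, 2]
```

Note that `pairs` works by zipping the *same* iterator with itself, so each `zip` step consumes two elements.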
| 27.1875 | 75 | 0.725287 | 133 | 870 | 4.736842 | 0.639098 | 0.095238 | 0.04127 | 0.050794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013908 | 0.173563 | 870 | 31 | 76 | 28.064516 | 0.862309 | 0.654023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.222222 | 0.333333 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
97e73f20826e580f553c50fa8510c0e35ee9a048 | 365 | py | Python | blsqpy/query.py | BLSQ/blsqpy | 52fcbd655780e78eccceb2a61280262194c2416c | [
"MIT"
] | null | null | null | blsqpy/query.py | BLSQ/blsqpy | 52fcbd655780e78eccceb2a61280262194c2416c | [
"MIT"
] | 7 | 2018-12-18T10:11:34.000Z | 2019-03-27T07:09:38.000Z | blsqpy/query.py | BLSQ/blsqpy | 52fcbd655780e78eccceb2a61280262194c2416c | [
"MIT"
] | 2 | 2018-12-12T12:31:40.000Z | 2019-02-25T12:34:48.000Z | import os
from jinja2 import Environment, FileSystemLoader
QUERIES_DIR = os.path.dirname(os.path.abspath(__file__))
def get_query(query_name, params):
    j2_env = Environment(loader=FileSystemLoader(QUERIES_DIR + "/queries"),
                         trim_blocks=True)
    return j2_env.get_template(query_name + '.sql').render(**params) + "\n -- query : " + query_name
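`get_query` is a thin wrapper over jinja2's `Environment`/`FileSystemLoader`. A minimal sketch of the same rendering flow, using an in-memory `DictLoader` so it runs without a `queries/` directory on disk; the template name and parameter below are hypothetical:

```python
from jinja2 import Environment, DictLoader

# Stand-in for a hypothetical file queries/active_users.sql on disk.
templates = {"active_users.sql": "SELECT id FROM users WHERE year >= {{ min_year }}"}

j2_env = Environment(loader=DictLoader(templates), trim_blocks=True)
sql = j2_env.get_template("active_users.sql").render(min_year=2019)
sql += "\n -- query : active_users"  # same trailing-comment convention as above
print(sql)
```

The trailing `-- query : <name>` comment makes the originating template identifiable in database query logs.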
| 36.5 | 94 | 0.715068 | 47 | 365 | 5.255319 | 0.595745 | 0.109312 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009772 | 0.158904 | 365 | 9 | 95 | 40.555556 | 0.794788 | 0 | 0 | 0 | 0 | 0 | 0.071233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
3f0241d966136442d63f54ae450fa5bbf000c236 | 883 | py | Python | systems/stage.py | will-nickson/starter_system | bce669250fc58c3966c71e84020e078871a79e4f | [
"MIT"
] | null | null | null | systems/stage.py | will-nickson/starter_system | bce669250fc58c3966c71e84020e078871a79e4f | [
"MIT"
] | null | null | null | systems/stage.py | will-nickson/starter_system | bce669250fc58c3966c71e84020e078871a79e4f | [
"MIT"
] | null | null | null | from log.logger import logger
class SystemStage(object):
    """
    Default stage object: creates a SystemStage for doing something
    """

    @property
    def name(self):
        return "Need to replace name when inheriting"

    def __repr__(self):
        return "SystemStage '%s' Try %s.methods()" % (
            self.name,
            self.name,
        )

    def methods(self):
        return get_methods(self)

    # "System" is quoted as a forward reference: System is not imported in
    # this module, so an unquoted annotation would raise a NameError when
    # the class body is evaluated.
    def system_init(self, system: "System"):
        # method called once we have a system
        self._parent = system

        # and a log
        log = system.log.setup(stage=self.name)
        self._log = log

    @property
    def log(self) -> logger:
        # logtoscreen is assumed to be provided elsewhere in the package
        log = getattr(self, "_log", logtoscreen(""))
        return log

    @property
    def parent(self) -> "System":
        parent = getattr(self, "_parent", None)
        return parent
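The class above is mostly plumbing for a parent/child relationship between a system and its stages. A reduced, runnable sketch of that pattern, dropping the logging and the `System`/`get_methods` dependencies that live elsewhere in the package:

```python
class Stage:
    """Minimal stand-in for SystemStage: a stage that can be attached to a parent."""

    @property
    def name(self):
        return "Need to replace name when inheriting"

    def system_init(self, system):
        # Called once the owning system exists.
        self._parent = system

    @property
    def parent(self):
        # Guarded lookup: None until system_init has run.
        return getattr(self, "_parent", None)


class RawData(Stage):
    @property
    def name(self):
        return "rawdata"


stage = RawData()
print(stage.parent)          # None before attachment
stage.system_init("system")
print(stage.parent)          # system
```

The `getattr(self, "_parent", None)` guard is what lets a stage be constructed and inspected before it is wired into a system.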
| 22.641026 | 71 | 0.571914 | 101 | 883 | 4.90099 | 0.415842 | 0.066667 | 0.048485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.327293 | 883 | 38 | 72 | 23.236842 | 0.833333 | 0.124575 | 0 | 0.208333 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.041667 | 0.125 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3f0acb5cf9be9113370cabc267dfa5dafd6e50f5 | 895 | py | Python | survol/sources_types/oracle/library/__init__.py | AugustinMascarelli/survol | 7a822900e82d1e6f016dba014af5741558b78f15 | [
"BSD-3-Clause"
] | null | null | null | survol/sources_types/oracle/library/__init__.py | AugustinMascarelli/survol | 7a822900e82d1e6f016dba014af5741558b78f15 | [
"BSD-3-Clause"
] | null | null | null | survol/sources_types/oracle/library/__init__.py | AugustinMascarelli/survol | 7a822900e82d1e6f016dba014af5741558b78f15 | [
"BSD-3-Clause"
] | null | null | null | """
Oracle library
"""
import lib_common
from lib_properties import pc
def Graphic_colorbg():
    return "#CC99FF"

def EntityOntology():
    return ( ["Db", "Schema", "Library"], )

# Ambiguity with tables, oracle or normal users.
def MakeUri(dbName, schemaName, libraryName):
    return lib_common.gUriGen.UriMakeFromDict("oracle/library", { "Db": dbName, "Schema": schemaName, "Library": libraryName })

def AddInfo(grph, node, entity_ids_arr):
    # TODO: SPECIAL. Imported here to avoid circular inclusions, see oracle/package_body/__init__.py
    from sources_types.oracle import schema as oracle_schema
    argDb = entity_ids_arr[0]
    argSchema = entity_ids_arr[1]
    node_oraschema = oracle_schema.MakeUri(argDb, argSchema)
    grph.add((node_oraschema, pc.property_oracle_library, node))

def EntityName(entity_ids_arr):
    return entity_ids_arr[0] + "." + entity_ids_arr[1] + "." + entity_ids_arr[2]
| 28.870968 | 128 | 0.750838 | 120 | 895 | 5.35 | 0.491667 | 0.098131 | 0.130841 | 0.040498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008974 | 0.128492 | 895 | 30 | 129 | 29.833333 | 0.814103 | 0.175419 | 0 | 0 | 0 | 0 | 0.072702 | 0 | 0 | 0 | 0 | 0.033333 | 0 | 1 | 0.3125 | false | 0 | 0.1875 | 0.25 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
3f1f9aba8ecf3aa6254017a10062ec1345e2b069 | 2,943 | py | Python | tests/factorys.py | 2h4dl/pymilvus | 6af6d4922242ae48d90ed5a1afb891d9e4d1540e | [
"Apache-2.0"
] | null | null | null | tests/factorys.py | 2h4dl/pymilvus | 6af6d4922242ae48d90ed5a1afb891d9e4d1540e | [
"Apache-2.0"
] | null | null | null | tests/factorys.py | 2h4dl/pymilvus | 6af6d4922242ae48d90ed5a1afb891d9e4d1540e | [
"Apache-2.0"
] | null | null | null | # STL imports
import random
import logging
import string
import time
import datetime
import struct
import sys
from functools import wraps

# Third party imports
import numpy as np
import faker
from faker.providers import BaseProvider

logging.getLogger('faker').setLevel(logging.ERROR)

sys.path.append('.')

# grpc
from milvus.grpc_gen import milvus_pb2


def gen_vectors(num, dim):
    return [[random.random() for _ in range(dim)] for _ in range(num)]


def gen_single_vector(dim):
    return [[random.random() for _ in range(dim)]]


def gen_vector(nb, d, seed=np.random.RandomState(1234)):
    xb = seed.rand(nb, d).astype("float32")
    return xb.tolist()


def gen_unique_str(str=None):
    prefix = "".join(random.choice(string.ascii_letters + string.digits) for _ in range(8))
    return prefix if str is None else str + "_" + prefix


def get_current_day():
    return time.strftime('%Y-%m-%d', time.localtime())


def get_last_day(day):
    tmp = datetime.datetime.now() - datetime.timedelta(days=day)
    return tmp.strftime('%Y-%m-%d')


def get_next_day(day):
    tmp = datetime.datetime.now() + datetime.timedelta(days=day)
    return tmp.strftime('%Y-%m-%d')


def gen_long_str(num):
    # Renamed the accumulator from "string" to avoid shadowing the string
    # module; the original also fell through without returning the result.
    long_str = ''
    for _ in range(num):
        char = random.choice('tomorrow')
        long_str += char
    return long_str


def gen_one_binary(topk):
    ids = [random.randrange(10000000, 99999999) for _ in range(topk)]
    distances = [random.random() for _ in range(topk)]
    return milvus_pb2.TopKQueryResult(struct.pack(str(topk) + 'l', *ids), struct.pack(str(topk) + 'd', *distances))


def gen_nq_binaries(nq, topk):
    return [gen_one_binary(topk) for _ in range(nq)]


def fake_query_bin_result(nq, topk):
    return gen_nq_binaries(nq, topk)


class FakerProvider(BaseProvider):
    def collection_name(self):
        return 'collection_names' + str(random.randint(1000, 9999))

    def name(self):
        return 'name' + str(random.randint(1000, 9999))

    def dim(self):
        return random.randint(0, 999)


fake = faker.Faker()
fake.add_provider(FakerProvider)


def collection_name_factory():
    return fake.collection_name()


def records_factory(dimension, nq):
    return [[random.random() for _ in range(dimension)] for _ in range(nq)]


def binary_records_factory(dimension, nq):
    def binary_record(bsize):
        s_m = "abcdefghijklmnopqrstuvwxyz"
        s_list = [s_m[random.randint(0, 25)] for _ in range(bsize)]
        s = "".join(s_list)
        return bytes(s, encoding="ASCII")

    bs = dimension // 8
    return [binary_record(bs) for _ in range(nq)]


def integer_factory(nq):
    return [random.randint(0, 128) for _ in range(nq)]


def time_it(func):
    @wraps(func)
    def inner(*args, **kwargs):
        pref = time.perf_counter()
        result = func(*args, **kwargs)
        delt = time.perf_counter() - pref
        print(f"[{func.__name__}][{delt:.4}s]")
        return result
    return inner
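`time_it` is a standard timing decorator: `functools.wraps` preserves the wrapped function's name and docstring so the printed label stays meaningful. A self-contained usage sketch (the `slow_sum` function is just an illustrative workload):

```python
import time
from functools import wraps

def time_it(func):
    @wraps(func)
    def inner(*args, **kwargs):
        pref = time.perf_counter()
        result = func(*args, **kwargs)
        delt = time.perf_counter() - pref
        # func.__name__ survives thanks to @wraps
        print(f"[{func.__name__}][{delt:.4}s]")
        return result
    return inner

@time_it
def slow_sum(n):
    return sum(range(n))

total = slow_sum(1_000)  # prints a line like [slow_sum][...s]
```

Without `@wraps`, `slow_sum.__name__` would report `inner` and the timing label would be useless when several functions are decorated.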
| 23.357143 | 115 | 0.675841 | 416 | 2,943 | 4.622596 | 0.324519 | 0.033801 | 0.067603 | 0.035361 | 0.218409 | 0.156006 | 0.113365 | 0.113365 | 0.078003 | 0.078003 | 0 | 0.02267 | 0.190622 | 2,943 | 125 | 116 | 23.544 | 0.784635 | 0.012232 | 0 | 0.051282 | 0 | 0 | 0.044092 | 0.018946 | 0 | 0 | 0 | 0 | 0 | 1 | 0.269231 | false | 0 | 0.166667 | 0.141026 | 0.705128 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3f221d9155ff841349fd18bfe1fa0dbaac313b9d | 260 | py | Python | Algorithms/LCP/29/math1.py | M-Quadra/LeetCode-problems | 0cc100aa1e50b02df289f04fe2e0b97239eb9895 | [
"MIT"
] | null | null | null | Algorithms/LCP/29/math1.py | M-Quadra/LeetCode-problems | 0cc100aa1e50b02df289f04fe2e0b97239eb9895 | [
"MIT"
] | null | null | null | Algorithms/LCP/29/math1.py | M-Quadra/LeetCode-problems | 0cc100aa1e50b02df289f04fe2e0b97239eb9895 | [
"MIT"
] | null | null | null | class Solution:
def orchestraLayout(self, num: int, xPos: int, yPos: int) -> int:
a, b = (min(xPos, num-1-yPos), 1) if yPos >= xPos else (min(yPos, num-1-xPos), -1)
return (4*num*a - 4*a*a - 2*a + b*(xPos+yPos) + (b>>1&1)*4*(num-a-1))%9 + 1 | 65 | 90 | 0.546154 | 51 | 260 | 2.784314 | 0.372549 | 0.028169 | 0.070423 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064039 | 0.219231 | 260 | 4 | 91 | 65 | 0.635468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
3f2bff3972bb90e7ea59576e6eccf7d56961ec7e | 196 | py | Python | Voting/urls.py | Poornartha/Odonata | 71e8dfc4e8d93c6ecc1a3a155459b7e43bd28cdb | [
"MIT"
] | null | null | null | Voting/urls.py | Poornartha/Odonata | 71e8dfc4e8d93c6ecc1a3a155459b7e43bd28cdb | [
"MIT"
] | null | null | null | Voting/urls.py | Poornartha/Odonata | 71e8dfc4e8d93c6ecc1a3a155459b7e43bd28cdb | [
"MIT"
] | null | null | null | from django.urls import path
from .views import teams_all, team_vote
urlpatterns = [
    path('teams/all', teams_all, name="teams_all"),
    path('teams/<int:pk>', team_vote, name="team_vote"),
] | 28 | 56 | 0.704082 | 30 | 196 | 4.4 | 0.466667 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137755 | 196 | 7 | 57 | 28 | 0.781065 | 0 | 0 | 0 | 0 | 0 | 0.208122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3f348185cd12109292cb8c384d3bec9afb87b02b | 193 | py | Python | serve.py | haiyoumeiyou/cherrybrigde | f00a0592240b60cc42b895ad194b0273485956d0 | [
"BSD-3-Clause"
] | null | null | null | serve.py | haiyoumeiyou/cherrybrigde | f00a0592240b60cc42b895ad194b0273485956d0 | [
"BSD-3-Clause"
] | null | null | null | serve.py | haiyoumeiyou/cherrybrigde | f00a0592240b60cc42b895ad194b0273485956d0 | [
"BSD-3-Clause"
] | null | null | null | from application import bootstrap
bootstrap()
if __name__ == '__main__':
    import cherrypy

    cherrypy.engine.signals.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()
| 19.3 | 39 | 0.720207 | 20 | 193 | 6.55 | 0.65 | 0.320611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176166 | 193 | 10 | 40 | 19.3 | 0.823899 | 0 | 0 | 0 | 0 | 0 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.285714 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3f473e7173cd4e6d679a1656ee0296fc204724d2 | 166 | py | Python | code/taskB/models.py | nft-appraiser/nft-appraiser-api | 6d6495049851afd3d9bfc6969d0e1c9bc430dc81 | [
"MIT"
] | null | null | null | code/taskB/models.py | nft-appraiser/nft-appraiser-api | 6d6495049851afd3d9bfc6969d0e1c9bc430dc81 | [
"MIT"
] | null | null | null | code/taskB/models.py | nft-appraiser/nft-appraiser-api | 6d6495049851afd3d9bfc6969d0e1c9bc430dc81 | [
"MIT"
] | null | null | null | from django.db import models
class TaskB_table(models.Model):
    img = models.ImageField(upload_to='taskB/', default='defo')
    pred_price = models.FloatField()
| 23.714286 | 63 | 0.728916 | 22 | 166 | 5.363636 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 166 | 6 | 64 | 27.666667 | 0.830986 | 0 | 0 | 0 | 0 | 0 | 0.060241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
3f55d1d3db5efaf77627369621529da7de9da985 | 149 | py | Python | wordpress/apps.py | 2e2a/django-wordpress | 5417d98128ea6ad4308b250fdee65226e7deb628 | [
"BSD-3-Clause"
] | 1 | 2021-12-03T19:55:27.000Z | 2021-12-03T19:55:27.000Z | wordpress/apps.py | 2e2a/django-wordpress | 5417d98128ea6ad4308b250fdee65226e7deb628 | [
"BSD-3-Clause"
] | null | null | null | wordpress/apps.py | 2e2a/django-wordpress | 5417d98128ea6ad4308b250fdee65226e7deb628 | [
"BSD-3-Clause"
] | null | null | null | from django.apps import AppConfig
class WordpressAppConfig(AppConfig):
    name = 'wordpress'
    default_auto_field = 'django.db.models.AutoField'
3f59a6465e2607784678cb918b686e6250106802 | 142 | py | Python | ex005-antecessorSucessor/005.py | KaiqueCassal/cursoEmVideoPython | 9d37563045091e4d558e283d47a5a49378e9df71 | [
"MIT"
] | 1 | 2021-08-11T04:38:33.000Z | 2021-08-11T04:38:33.000Z | ex005-antecessorSucessor/005.py | KaiqueCassal/cursoEmVideoPython | 9d37563045091e4d558e283d47a5a49378e9df71 | [
"MIT"
] | null | null | null | ex005-antecessorSucessor/005.py | KaiqueCassal/cursoEmVideoPython | 9d37563045091e4d558e283d47a5a49378e9df71 | [
"MIT"
] | null | null | null | num = int(input('Enter an integer: '))
print(f'The number: {num}'
      f'\nThe predecessor: {num - 1}'
      f'\nThe successor: {num + 1}')
| 23.666667 | 46 | 0.56338 | 22 | 142 | 3.636364 | 0.636364 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018349 | 0.232394 | 142 | 5 | 47 | 28.4 | 0.715596 | 0 | 0 | 0 | 0 | 0 | 0.626761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3f6013f8688ca16e27ef6533df61114f0ade964b | 22,620 | py | Python | rules/check_imported_dlls.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 84 | 2018-01-07T19:43:45.000Z | 2021-12-23T14:17:44.000Z | rules/check_imported_dlls.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 7 | 2018-04-02T20:24:28.000Z | 2019-06-07T21:48:04.000Z | rules/check_imported_dlls.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 18 | 2017-12-26T19:44:46.000Z | 2021-09-13T12:21:02.000Z | """
Align imported dlls/functions to executable functionality.
"""
import sys
# import all supported ordinal decodings
from rules.ordinal_mappings import advapi32
from rules.ordinal_mappings import cabinet
from rules.ordinal_mappings import comctl32
from rules.ordinal_mappings import mfc42
from rules.ordinal_mappings import msvbvm60
from rules.ordinal_mappings import ntdll
from rules.ordinal_mappings import odbc32
from rules.ordinal_mappings import oleaut32
from rules.ordinal_mappings import oledlg
from rules.ordinal_mappings import propsys
from rules.ordinal_mappings import shell32
from rules.ordinal_mappings import shlwapi
from rules.ordinal_mappings import ws2_32
from rules.ordinal_mappings import wsock32
# create ordinal mappings dictionary
ords2names = {}
ords2names['advapi32.dll'] = advapi32.mapping
ords2names['cabinet.dll'] = cabinet.mapping
ords2names['comctl32.dll'] = comctl32.mapping
ords2names['mfc42.dll'] = mfc42.mapping
ords2names['msvbvm60.dll'] = msvbvm60.mapping
ords2names['ntdll.dll'] = ntdll.mapping
ords2names['odbc32.dll'] = odbc32.mapping
ords2names['oleaut32.dll'] = oleaut32.mapping
ords2names['oledlg.dll'] = oledlg.mapping
ords2names['propsys.dll'] = propsys.mapping
ords2names['shell32.dll'] = shell32.mapping
ords2names['shlwapi.dll'] = shlwapi.mapping
ords2names['ws2_32.dll'] = ws2_32.mapping
ords2names['wsock32.dll'] = wsock32.mapping
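With `ords2names` in place, an import that appears only by ordinal can be mapped back to a function name before matching it against the `targets` descriptions below. A sketch of that lookup; the two-entry table is a hypothetical stand-in for the real per-DLL mappings (ordinal 115 in `ws2_32.dll` is commonly `WSAStartup`, but treat the values as illustrative):

```python
# Hypothetical miniature version of the ords2names table above.
demo_ords2names = {
    "ws2_32.dll": {115: "WSAStartup", 116: "WSACleanup"},
}

def resolve_import(dll_name, name_or_ordinal):
    """Decode an import to a function name, falling back to 'ordN' when unknown."""
    if isinstance(name_or_ordinal, int):
        table = demo_ords2names.get(dll_name.lower(), {})
        return table.get(name_or_ordinal, "ord" + str(name_or_ordinal))
    return name_or_ordinal

print(resolve_import("WS2_32.dll", 115))              # WSAStartup
print(resolve_import("ws2_32.dll", 999))              # ord999
print(resolve_import("kernel32.dll", "CreateFileA"))  # CreateFileA
```

Lowercasing the DLL name matters because PE import tables are not consistent about case (`WS2_32.dll` vs `ws2_32.dll`).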
# list of targeted functions and their descriptions
targets = {
'accept': 'This function is used to listen for incoming connections on a socket, indicating that the program will accept inbound connections. It is often used by malware to communicate with its command-and-control server.',
'AdjustTokenPrivileges': 'This function is used to enable or disable specific access privileges. In a process injection attack, this function is used by malware to gain additional permissions.',
'AttachThreadInput': 'This function attaches the input processing from one thread to another so that the second thread receives input events such as keyboard and mouse events. Keyloggers and other spyware use this function.',
'bind': 'This function is used to associate a local address to a socket in order to listen for incoming connections.',
'BitBlt': 'This function is used to copy graphic data from one device to another. Spyware sometimes uses this function to capture screenshots.',
'CertOpenSystemStore': 'This function is used to access the certificates stored on the local system.',
'CheckRemoteDebuggerPresent': 'Determines whether the specified process is being debugged. Used by malware to detect and evade reversing.',
'connect': 'This function is used to connect to a remote socket. Malware often uses this low-level functionality to communicate with its command-and-control server.',
'ConnectNamedPipe': 'This function is used to create a server pipe for interprocess communication that will wait for a client pipe to connect. Backdoors and reverse shells sometimes use ConnectNamedPipe to simplify connectivity to a command-and-control server.',
'ControlService': 'This function is used to start, stop, modify, or send a signal to a running service. If malware is using its own malicious service, the code that implements the service needs to be analyzed in order to determine the purpose of the call.',
'CreateFile': 'Creates a new file or opens an existing file.',
'CreateFileMapping': 'This function is used to create a handle to a file mapping that loads a file into memory and makes it accessible via memory addresses. Launchers, loaders, and injectors use this function to read and modify PE files.',
'CreateMutex': 'This function creates a mutual exclusion object that can be used by malware to ensure that only a single instance of the malware is running on a system at any given time. Malware often uses fixed names for mutexes, which can be good host-based indicators to detect additional installations of the malware.',
'CreateProcess': 'This function creates and launches a new process. If malware creates a new process, the new process needs to be analyzed as well.',
'CreateRemoteThread': 'This function is used to start a thread in a remote process. Launchers and stealth malware use CreateRemoteThread to inject code into a different process.',
'CreateService': 'This function is used to create a service that can be started at boot time. Malware uses CreateService for persistence, stealth, or to load kernel drivers.',
'CreateToolhelp32Snapshot': 'This function is used to create a snapshot of processes, heaps, threads, and modules. Malware often uses this function as part of code that iterates through processes or threads.',
'CryptAcquireContext': 'This function is often the first function used by malware to initialize the use of Windows encryption.',
'DeviceIoControl': 'This function sends a control message from user space to a device driver. Kernel malware that needs to pass information between user space and kernel space often uses this function.',
'DllFunctionCall': 'This function is used to import a DLL within a Visual Basic executable. This indicates malware with Visual Basic functionality.',
'EnableExecuteProtectionSupport': 'This function is used to modify the Data Execution Protection (DEP) settings of the host, making it more susceptible to attack.',
'EnumProcesses': 'This function is used to enumerate through running processes on the system. Malware often enumerates through processes to find a process into which to inject.',
'EnumProcessModules': 'This function is used to enumerate the loaded modules (executables and DLLs) for a given process. Malware enumerates through modules when doing an injection.',
'FindFirstFile': 'This function is used to search through a directory and enumerate the file system.',
'FindNextFile': 'This function is used to search through a directory and enumerate the file system.',
'FindResource': 'This function is used to find a resource in an executable or loaded DLL. Malware sometimes uses resources to store strings, configuration information, or other malicious files. If this function is used, then check for an .rsrc section in the malware`s PE header.',
'FindWindow': 'This function is used to search for an open window on the desktop. Sometimes this function is used as an anti-debugging technique to search for OllyDbg windows.',
'FtpOpenFile': 'This function is used to open a file on a remote FTP server.',
'FtpPutFile': 'This function is used to upload a file to a remote FTP server.',
'GetAdaptersInfo': 'This function is used to obtain information about the network adapters on the system. Backdoors sometimes call GetAdaptersInfo in the information-gathering phase to gather information about infected machines. In some cases, it`s used to gather MAC addresses to check for VMware as part of anti-virtual machine techniques.',
'GetAsyncKeyState': 'This function is used to determine whether a particular key is being pressed. Malware sometimes uses this function to implement a keylogger.',
'GetClipboardData': 'This function is used to read user clipboard data and is sometimes used in keyloggers.',
'GetDC': 'This function returns a handle to a device context for a window or the whole screen. Spyware that takes screen captures often uses this function.',
'GetForegroundWindow': 'This function returns a handle to the window currently in the foreground of the desktop. Keyloggers commonly use this function to determine in which window the user is entering his keystrokes.',
'gethostbyname': 'This function is used to perform a DNS lookup on a particular hostname prior to making an IP connection to a remote host. Hostnames that serve as command-and-control servers often make good network-based signatures.',
'gethostname': 'This function is used to retrieve the hostname of the computer. Backdoors sometimes use gethostname in the information-gathering phase on the victim machine.',
'GetKeyState': 'This function is used by keyloggers to obtain the status of a particular key on the keyboard.',
'GetModuleFilename': 'This function returns the filename of a module that is loaded in the current process. Malware can use this function to modify or copy files in the currently running process.',
'GetModuleHandle': 'This function is used to obtain a handle to an already loaded module. Malware may use GetModuleHandle to locate and modify code in a loaded module or to search for a good location to inject code.',
'GetProcAddress': 'This function is used to retrieve the address of a function in a DLL loaded into memory. This is used to import functions from other DLLs in addition to the functions imported in the PE file header.',
'GetStartupInfo': 'This function is used to retrieve a structure containing details about how the current process was configured to run, such as where the standard handles are directed.',
'GetSystemDefaultLangId': 'This function returns the default language settings for the system. This is used by malware specifically designed for region-based attacks.',
'GetTempPath': 'This function returns the temporary file path. If malware calls this function, check whether it reads or writes any files in the temporary file path.',
'GetThreadContext': 'This function returns the context structure of a given thread. The context for a thread stores all the thread information, such as the register values and current state.',
'GetVersionEx': 'This function returns information about which version of Windows is currently running. This can be used as part of a victim survey, or to select between different offsets for undocumented structures that have changed between different versions of Windows.',
'GetWindowDC': 'This function retrieves the device context (DC) for the entire window, including title bar, menus, and scroll bars. Used to take a screenshot of a particular GUI window (like a browser).',
'GetWindowsDirectory': 'This function returns the file path to the Windows directory (usually C:\\Windows). Malware sometimes uses this call to determine into which directory to install additional malicious programs.',
'GetWindowText': 'This function gets the title of all program windows for the current user. Used to enumerate processes that have a GUI interface.',
'HttpOpenRequest': 'This function sets up the OS resources for an HTTP request.',
'HttpSendRequest': 'This function actually makes an outgoing HTTP connection.',
'inet_addr': 'This function converts an IP address string like 127.0.0.1 so that it can be used by functions such as connect. The string specified can sometimes be used as a network-based signature.',
'InternetOpen': 'This function initializes the high-level Internet access functions from WinINet, such as InternetOpenUrl and InternetReadFile. Searching for InternetOpen is a good way to find the start of Internet access functionality. One of the parameters to InternetOpen is the User-Agent, which can sometimes make a good network-based signature.',
'InternetOpenUrl': 'This function opens a specific URL for a connection using FTP, HTTP, or HTTPS. URLs, if fixed, can often be good network-based signatures.',
'InternetReadFile': 'This function reads data from a previously opened URL.',
'InternetWriteFile': 'This function writes data to a previously opened URL.',
'IsDebuggerPresent': 'Determines whether the calling process is being debugged by a user-mode debugger. Used by malware to detect and evade reversing.',
'IsNTAdmin': 'This function checks if the user has administrator privileges.',
'IsUserAnAdmin': 'This function checks if the user has administrator privileges.',
'IsWoW64Process': 'This function is used by a 32-bit process to determine if it is running on a 64-bit operating system.',
'LdrLoadDll': 'This is a low-level function to load a DLL into a process, just like LoadLibrary. Normal programs use LoadLibrary, and the presence of this import may indicate a program that is attempting to be stealthy.',
'LoadLibrary': 'This is the standard function to load a DLL into a process at runtime.',
'LoadResource': 'This function loads a resource from a PE file into memory. Malware sometimes uses resources to store strings, configuration information, or other malicious files.',
'LsaEnumerateLogonSessions': 'This function is used to enumerate through logon sessions on the current system, which can be used as part of a credential stealer.',
'MapViewOfFile': 'This function is used to map a file into memory and makes the contents of the file accessible via memory addresses. Launchers, loaders, and injectors use this function to read and modify PE files. By using MapViewOfFile, the malware can avoid using WriteFile to modify the contents of a file.',
'MapVirtualKey': 'This function is used to translate a virtual-key code into a character value. It is often used by keylogging malware.',
'Module32First/Module32Next': 'These functions are used to enumerate through modules loaded into a process. Injectors use them to determine where to inject code.',
'NetScheduleJobAdd': 'This function submits a request for a program to be run at a specified date and time. Malware can use NetScheduleJobAdd to run a different program. This is an important indicator to see the program that is scheduled to run at a future time.',
'NetShareEnum': 'This function is used to enumerate network shares.',
'NtQueryDirectoryFile': 'This function returns information about files in a directory. Rootkits commonly hook this function in order to hide files.',
'NtQueryInformationProcess': 'This function is used to return various information about a specified process. This function is sometimes used as an anti-debugging technique because it can return the same information as CheckRemoteDebuggerPresent.',
'NtSetInformationProcess': 'This function is used to change the privilege level of a program or to bypass Data Execution Prevention (DEP).',
'OpenMutex': 'This function opens a handle to a mutual exclusion object that can be used by malware to ensure that only a single instance of malware is running on a system at any given time. Malware often uses fixed names for mutexes, which can be good host-based indicators.',
'OpenProcess': 'This function is used to open a handle to another process running on the system. This handle can be used to read and write to the other process memory or to inject code into the other process.',
'OutputDebugString': 'This function is used to output a string to a debugger if one is attached. This can be used as an anti-debugging technique.',
'PeekNamedPipe': 'This function is used to copy data from a named pipe without removing data from the pipe. This function is popular with reverse shells.',
'Process32First': 'This function is used to begin enumerating processes from a previous call to CreateToolhelp32Snapshot. Malware often enumerates through processes to find a process into which to inject.',
'Process32Next': 'This function is used to continue enumerating processes from a previous call to CreateToolhelp32Snapshot. Malware often enumerates through processes to find a process into which to inject.',
'QueueUserAPC': 'This function is used to execute code for a different thread. Malware sometimes uses QueueUserAPC to inject code into another process.',
'ReadProcessMemory': 'This function is used to read the memory of a remote process.',
'recv': 'This function is used to receive data from a remote machine. Malware often uses this function to receive data from a remote command-and-control server.',
'RegCreateKey': 'This function is used to create a handle to a new registry key for reading and editing. Registry keys are sometimes written as a way for software to achieve persistence on a host. The registry also contains a whole host of operating system and application setting information.',
'RegisterHotKey': 'This function is used to register a handler to be notified anytime a user enters a particular key combination (like CTRL-ALT-J), regardless of which window is active when the user presses the key combination. This function is sometimes used by spyware that remains hidden from the user until the key combination is pressed.',
'RegOpenKey': 'This function is used to open a handle to a registry key for reading and editing. Registry keys are sometimes written as a way for software to achieve persistence on a host. The registry also contains a whole host of operating system and application setting information.',
'ResumeThread': 'This function is used to resume a previously suspended thread. ResumeThread is used as part of several injection techniques.',
'RtlCreateRegistryKey': 'This function is used to create a registry key from kernel-mode code.',
'RtlWriteRegistryValue': 'This function is used to write a value to the registry from kernel-mode code.',
'SamIConnect': 'This function is used to connect to the Security Account Manager (SAM) in order to make future calls that access credential information. Hash-dumping programs access the SAM database in order to retrieve the hash of users\' login passwords.',
'SamIGetPrivateData': 'This function is used to query the private information about a specific user from the Security Account Manager (SAM) database. Hash-dumping programs access the SAM database in order to retrieve the hash of users\' login passwords.',
'SamQueryInformationUser': 'This function is used to query information about a specific user in the Security Account Manager (SAM) database. Hash-dumping programs access the SAM database in order to retrieve the hash of users\' login passwords.',
'send': 'This function is used to send data to a remote machine. It is often used by malware to send data to a remote command-and-control server.',
'SetFileTime': 'This function is used to modify the creation, access, or last modified time of a file. Malware often uses this function to conceal malicious activity.',
'SetThreadContext': 'This function is used to modify the context of a given thread. Some injection techniques use SetThreadContext.',
'SetWindowsHookEx': 'This function is used to set a hook function to be called whenever a certain event is called. Commonly used with keyloggers and spyware, this function also provides an easy way to load a DLL into all GUI processes on the system. This function is sometimes added by the compiler.',
'SfcTerminateWatcherThread': 'This function is used to disable Windows file protection and modify files that otherwise would be protected.',
'ShellExecute': 'This function is used to execute another program.',
'StartServiceCtrlDispatcher': 'This function is used by a service to connect the main thread of the process to the service control manager. Any process that runs as a service must call this function within 30 seconds of startup. Locating this function in malware tells you that the program is meant to be run as a service.',
'SQLConnect': 'This function establishes a connection with a driver and data source to allow for data to be shared with the driver/data source.',
'SuspendThread': 'This function is used to suspend a thread so that it stops running. Malware will sometimes suspend a thread in order to modify it by performing code injection.',
'System': 'This function, provided by some C runtime libraries, is used to run another program. On Windows, this function serves as a wrapper around CreateProcess.',
'Thread32First/Thread32Next': 'These functions are used to iterate through the threads of a process. Injectors use these functions to find an appropriate thread into which to inject.',
'ThunRTMain': 'This function is used as the entry point to a Visual Basic executable. This indicates malware with Visual Basic functionality.',
'Toolhelp32ReadProcessMemory': 'This function is used to read the memory of a remote process.',
'URLDownloadToFile': 'This function is used to download a file from a web server and save it to disk. This function is popular with downloaders because it implements all the functionality of a downloader in one function call.',
'VirtualAllocEx': 'This function is a memory-allocation routine that can allocate memory in a remote process. Malware sometimes uses VirtualAllocEx as part of process injection.',
'VirtualProtectEx': 'This function is used to change the protection on a region of memory. Malware may use this function to change a read-only section of memory to an executable.',
'WideCharToMultiByte': 'This function is used to convert a Unicode string into an ASCII string.',
'WinExec': 'This function is used to execute another program.',
'WriteProcessMemory': 'This function is used to write data to a remote process. Malware uses WriteProcessMemory as part of process injection.',
'WSAStartup': 'This function is used to initialize low-level network functionality. Finding calls to WSAStartup can often be an easy way to locate the start of network related functionality.',
'Zombie_AddRef': 'This function is used to make a call to a Visual Basic subroutine. This indicates malware with Visual Basic functionality.'
}
# constant for an unknown import by ordinal
ORDINAL_DESC = 'Ordinal is decoded at runtime. To see the ordinal mapping, download the DLL and use the parse_exports() method of the PE class.'
def run(peobject):
    found = []
    alerts = []
    # search for functionality in imports list
    for dll in peobject.parse_imports():
        # loop through each function in the DLL
        for f in dll['functions']:
            name = f['name']
            # check for dll import by ordinal and try to resolve it
            if f['ordinal']:
                if (dll['dll'].lower() in ords2names) and (f['ordinal'] in ords2names[dll['dll'].lower()]):
                    name = ords2names[dll['dll'].lower()][f['ordinal']]
                else:
                    # unknown dll with ordinal import
                    target = ''.join(['[', dll['dll'], '] ordinal(', hex(f['ordinal']).rstrip('L'), ') : ', ORDINAL_DESC])
                    if target not in found:
                        found.append(target)
            # check for function name in targets
            match = [k for k in targets if name in k]
            if match:
                target = ''.join(['[', dll['dll'], '] ', match[0], ' : ', targets[match[0]]])
                if target not in found:
                    found.append(target)
    # this rule generates only one alert
    if found:
        alerts.append({
            'title': 'Suspicious Imports',
            'description': 'These are functions imported by the executable that indicate its functionality.',
            'data': found,
            'code': '',
        })
    return alerts
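Note that the lookup `match = [k for k in targets if name in k]` above is a substring match against the dictionary keys, not an exact lookup, so one import name can match several keys. A minimal standalone sketch of that behaviour (using a toy two-entry stand-in for the full `targets` table):

```python
# Toy stand-in for the targets dictionary; only the matching logic matters here.
targets = {
    'Process32First': 'Begins enumerating processes.',
    'Process32Next': 'Continues enumerating processes.',
}

def find_matches(name):
    # Substring match: an import name matches every key that contains it.
    return [k for k in targets if name in k]

# An exact import name matches only its own key...
exact = find_matches('Process32Next')
# ...but a shorter name matches all keys containing it.
partial = find_matches('Process32')
```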

# === apps/api/v1/pagination.py (asmuratbek/oobamarket, MIT) ===
from rest_framework.pagination import LimitOffsetPagination, PageNumberPagination
class CategoryLimitPagination(PageNumberPagination):
    page_size = 20
    page_size_query_param = 'page_size'
    max_page_size = 40


class ProductLimitPagination(PageNumberPagination):
    page_size = 20
    page_size_query_param = 'page_size'
    max_page_size = 40


class ShopLimitPagination(PageNumberPagination):
    page_size = 21
    page_size_query_param = 'page_size'
    max_page_size = 42


class ShopProductsLimitPagination(PageNumberPagination):
    page_size = 24
    page_size_query_param = 'page_size'
    max_page_size = 42
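With `page_size_query_param` set, DRF lets a client request its own page size but bounds it by `max_page_size`. The bounding rule these subclasses rely on can be sketched standalone (`clamp_page_size` is a hypothetical helper for illustration, not a DRF API):

```python
def clamp_page_size(requested, default=20, max_page_size=40):
    """Mimic how a ?page_size= query value is bounded: fall back to the
    class default when absent, and never exceed max_page_size."""
    if requested is None:
        return default
    return min(int(requested), max_page_size)

no_param = clamp_page_size(None)      # default applies
too_big = clamp_page_size("100")      # clamped to max_page_size
in_range = clamp_page_size("25")      # honoured as-is
```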

# === ex29_half.py (youknowone/learn-python3-thw-code-ko, MIT) ===
people = 20
cats = 30
dogs = 15

if people < cats:
    print("Too many cats! The world is doomed!")

if people > cats:
    print("Not many cats! The world is saved!")

if people < dogs:
    print("The world is drooled on!")

if people > dogs:
    print("The world is dry!")

dogs += 5

if people >= dogs:
    print("People are greater than or equal to dogs.")

if people <= dogs:
    print("People are less than or equal to dogs.")

if people == dogs:
    print("People are dogs.")

# === classification/tests/test_evidence_mixin.py (SACGF/variantgrid, RSA-MD) ===
from classification.models import EvidenceMixin
from classification.models.evidence_mixin import VCStore


class BasicEvidence(EvidenceMixin):

    def __init__(self, evidence: VCStore):
        self.evidence = evidence

    @property
    def _evidence(self) -> VCStore:
        return self.evidence


# doesn't work without Transcripts loaded now
# class EvidenceMixinTest(TestCase):
#
#     @override_settings(VARIANT_ANNOTATION_TRANSCRIPT_PREFERENCES=['refseq_transcript_accession'])
#     def test_get_transcript(self):
#         # if transcript version is in c.hgvs use it
#         be = BasicEvidence({
#             SpecialEKeys.C_HGVS: "NM_020975.5(RET):c.867+48A>G",
#             SpecialEKeys.REFSEQ_TRANSCRIPT_ID: "NM_020975",
#             SpecialEKeys.GENOME_BUILD: "GRCh37"
#         })
#         self.assertEqual(be.transcript, "NM_020975.5")
#
#         # if transcript version is in c.hgvs but transcript doesn't match
#         # value in transcript field, use the raw transcript value
#         be = BasicEvidence({
#             SpecialEKeys.C_HGVS: "NM_020975.5(RET):c.867+48A>G",
#             SpecialEKeys.REFSEQ_TRANSCRIPT_ID: "NM_033333",
#             SpecialEKeys.GENOME_BUILD: "GRCh37"
#         })
#         self.assertEqual(be.transcript, "NM_033333")
#
#         # if there is no transcript field, use the contents of c.hgvs
#         be = BasicEvidence({
#             SpecialEKeys.C_HGVS: "NM_020975.5(RET):c.867+48A>G",
#             SpecialEKeys.GENOME_BUILD: "GRCh37"
#         })
#         self.assertEqual(be.transcript, "NM_020975.5")
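The commented-out test expects a transcript accession such as `NM_020975.5` to be recoverable from a c.HGVS string like `NM_020975.5(RET):c.867+48A>G`. A standalone sketch of that extraction (`transcript_from_c_hgvs` is a hypothetical helper, not the project's actual `transcript` property):

```python
def transcript_from_c_hgvs(c_hgvs):
    # The accession is everything before the '(' that opens the gene symbol.
    return c_hgvs.split('(')[0]

accession = transcript_from_c_hgvs("NM_020975.5(RET):c.867+48A>G")
```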

# === server/config.py (nikitinivan/Crypton, MIT) ===
import os
class Configuration:
    APPLICATION_DIR = os.path.dirname(os.path.realpath(__file__))
    DEBUG = True
    SECRET_KEY = 'thisissecretkeyforcrypton'  # change it in production !!!
    MONGO_DBNAME = 'cryptondb'

# === river/compose/renamer.py (online-ml/creme, BSD-3-Clause) ===
from typing import Dict
from river import base

__all__ = ["Renamer", "Prefixer", "Suffixer"]


class Renamer(base.Transformer):
    """Renames features following substitution rules.

    Parameters
    ----------
    mapping
        Dictionary describing substitution rules. Keys in `mapping` that are not a feature's
        name are silently ignored.

    Examples
    --------

    >>> from river import compose

    >>> mapping = {'a': 'v', 'c': 'o'}
    >>> x = {'a': 42, 'b': 12}
    >>> compose.Renamer(mapping).transform_one(x)
    {'b': 12, 'v': 42}

    """

    def __init__(self, mapping: Dict[str, str]):
        self.mapping = mapping

    def transform_one(self, x):
        for old_key, new_key in self.mapping.items():
            try:
                x[new_key] = x.pop(old_key)
            except KeyError:
                pass  # Ignoring keys that are not a feature's name
        return x


class Prefixer(base.Transformer):
    """Prepends a prefix on features names.

    Parameters
    ----------
    prefix

    Examples
    --------

    >>> from river import compose

    >>> x = {'a': 42, 'b': 12}
    >>> compose.Prefixer('prefix_').transform_one(x)
    {'prefix_a': 42, 'prefix_b': 12}

    """

    def __init__(self, prefix: str):
        self.prefix = prefix

    def _rename(self, s: str) -> str:
        return f"{self.prefix}{s}"

    def transform_one(self, x):
        return {self._rename(i): xi for i, xi in x.items()}


class Suffixer(base.Transformer):
    """Appends a suffix on features names.

    Parameters
    ----------
    suffix

    Examples
    --------

    >>> from river import compose

    >>> x = {'a': 42, 'b': 12}
    >>> compose.Suffixer('_suffix').transform_one(x)
    {'a_suffix': 42, 'b_suffix': 12}

    """

    def __init__(self, suffix: str):
        self.suffix = suffix

    def _rename(self, s: str) -> str:
        return f"{s}{self.suffix}"

    def transform_one(self, x):
        return {self._rename(i): xi for i, xi in x.items()}
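The pop-based renaming in `Renamer.transform_one` also works outside the library; a minimal standalone sketch of the same logic (the `rename_features` helper is illustrative, not part of river):

```python
def rename_features(x, mapping):
    # Mirror Renamer.transform_one: move each value to its new key,
    # silently skipping mapping keys that are absent from the sample.
    for old_key, new_key in mapping.items():
        try:
            x[new_key] = x.pop(old_key)
        except KeyError:
            pass
    return x

sample = {'a': 42, 'b': 12}
renamed = rename_features(sample, {'a': 'v', 'c': 'o'})
# 'a' moved to 'v'; 'c' was ignored because the sample has no 'c'
```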

# === chapter10/exercises/EG10-20 Twinkle Twinkle classes.py (munnep/begin_to_code_with_python, MIT) ===
# EG10-20 Twinkle Twinkle classes
import time
import snaps


class Note:

    def __init__(self, note, duration):
        self.__note = note
        self.__duration = duration

    def play(self):
        snaps.play_note(self.__note)
        time.sleep(self.__duration)


tune = [Note(note=0, duration=0.4), Note(note=0, duration=0.4),
        Note(note=7, duration=0.4), Note(note=7, duration=0.4),
        Note(note=9, duration=0.4), Note(note=9, duration=0.4),
        Note(note=7, duration=0.8), Note(note=5, duration=0.4),
        Note(note=5, duration=0.4), Note(note=4, duration=0.4),
        Note(note=4, duration=0.4), Note(note=2, duration=0.4),
        Note(note=2, duration=0.4), Note(note=0, duration=0.8)]

for note in tune:
    note.play()

# === Chapter 01/Chap01_Example1.40.py (Anancha/Programming-Techniques-using-Python, MIT) ===
# backslash and new line ignored
print("one\
two\
three")

# === contacts/urls.py (HaraDev001/RealEstate-Backend, MIT) ===
from django.urls import path
from .views import ContactCreateView

urlpatterns = [
    path('', ContactCreateView.as_view()),
]

# === internal/handlers/lebanon.py (fillingthemoon/cartogram-web, MIT) ===
import settings
import handlers.base_handler
import csv


class CartogramHandler(handlers.base_handler.BaseCartogramHandler):

    def get_name(self):
        return "Lebanon"

    def get_gen_file(self):
        return "{}/lbn_processedmap.json".format(settings.CARTOGRAM_DATA_DIR)

    def validate_values(self, values):
        if len(values) != 8:
            return False
        for v in values:
            if type(v) != float:
                return False
        return True

    def gen_area_data(self, values):
        return """1 {} Akkar
2 {} Baalbak-Hermel
3 {} Beirut
4 {} Beqaa
5 {} Mount Lebanon
6 {} Nabatieh
7 {} North
8 {} South""".format(*values)

    def expect_geojson_output(self):
        return True

    def csv_to_area_string_and_colors(self, csvfile):
        return self.order_by_example(csv.reader(csvfile), "Governorate", 0, 1, 2, 3, ["Akkar","Baalbak-Hermel","Beirut","Beqaa","Mount Lebanon","Nabatieh","North","South"], [0.0 for i in range(0,8)], {"Akkar":"1","Baalbak-Hermel":"2","Beirut":"3","Beqaa":"4","Mount Lebanon":"5","Nabatieh":"6","North":"7","South":"8"})
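`gen_area_data` interleaves the eight validated values into a fixed `id value name` template via `str.format(*values)`. The formatting step can be checked standalone (abbreviated here to three of the eight governorate rows):

```python
# Abbreviated version of the gen_area_data template: one "{}" slot per row.
template = """1 {} Akkar
2 {} Baalbak-Hermel
3 {} Beirut"""

# Positional unpacking fills the slots in row order.
rows = template.format(1.0, 2.0, 3.0).splitlines()
```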

# === phr/dnireniec/apps.py (richardqa/django-ex, CC0-1.0) ===
from django.apps import AppConfig
class DnireniecConfig(AppConfig):
    name = 'dnireniec'

# === tests/jdi_uitests_webtests/main/page_objects/w3c_site/w3c_site.py (jdi-testing/jdi-python, MIT) ===
from JDI.web.selenium.elements.composite.web_site import WebSite
from tests.jdi_uitests_webtests.main.page_objects.w3c_site.frame_page import FramePage


class W3cSite(WebSite):
    domain = "https://www.w3schools.com"
    frame_page = FramePage(url="/tags/tag_button.asp", domain=domain)

# === deepx/backend/tensorflow.py (sharadmv/deepx, MIT) ===
import copy
import logging
import numpy as np
import six
import tensorflow as tf
from functools import wraps
from contextlib import contextmanager

from .backend_base import BackendBase, FunctionBase, DeviceDecorator

try:
    from tensorflow.contrib.distributions import fill_triangular
except:
    print("Cannot find fill_triangular")


class TensorflowFunction(FunctionBase):

    def __init__(self, *args, **kwargs):
        super(TensorflowFunction, self).__init__(*args, **kwargs)
        with tf.control_dependencies(self.outputs):
            self.updates = [tf.assign(k, v) for k, v in self.updates]

    def __call__(self, *inputs):
        feed_dict = self.feed_dict(*inputs)
        result = self.session.get_current_session().run(self.outputs + self.updates, feed_dict=feed_dict)
        if len(self.outputs) == 1:
            return result[0]
        return result[:len(self.outputs)]
@six.add_metaclass(DeviceDecorator)
class TensorflowBackend(BackendBase):

    def __init__(self, **kwargs):
        super(TensorflowBackend, self).__init__(**kwargs)
        self.core = tf
        self._sessions = []
        self.set_default_device(self.gpu() if tf.test.is_gpu_available() else self.cpu())

    # General purpose methods

    @classmethod
    def use_device(cls, method):
        @wraps(method)
        def func(self, *args, **kwargs):
            with tf.device(self.get_current_device()):
                result = method(self, *args, **kwargs)
            return result
        return func

    def enable_eager(self):
        tf.enable_eager_execution()

    def cpu(self, id=0):
        return 'cpu/:%u' % id

    def gpu(self, id=0):
        return 'gpu/:%u' % id

    @property
    def int32(self):
        return tf.int32

    @property
    def float32(self):
        return tf.float32

    def _placeholder(self, dtype=None, shape=None, name=None):
        with self._device(self.get_current_device()):
            return tf.placeholder(dtype, shape=shape, name=name)

    def _variable(self, initial_value=None, trainable=True, name=None):
        with self._device(self.get_current_device()):
            return tf.Variable(initial_value=initial_value, trainable=trainable, name=name)

    def _device(self, name):
        return tf.device(name)

    def create_session(self, graph=None, **kwargs):
        allow_growth = kwargs.pop('allow_growth', False)
        config_proto = tf.ConfigProto(**kwargs)
        config_proto.gpu_options.allow_growth = allow_growth
        sess = tf.Session(graph=graph, config=config_proto)
        self._initialize(sess)
        return sess

    @contextmanager
    def session(self, **kwargs):
        with self.create_session(**kwargs) as sess:
            self._sessions.append(sess)
            self._initialize(sess)
            yield sess
            self._sessions.pop()

    def interactive_session(self, graph=None, **kwargs):
        config_proto = tf.ConfigProto(**kwargs)
        sess = tf.InteractiveSession(config=config_proto, graph=graph)
        self._initialize(sess)
        return sess

    def get_current_session(self):
        if len(self._sessions) == 0:
            raise Exception('No current session')
        return self._sessions[-1]

    def _initialize(self, sess):
        sess.run(tf.local_variables_initializer())
        sess.run(tf.global_variables_initializer())
    # Unified interface

    def cast(self, x, dtype):
        return tf.cast(x, dtype)

    def dtype(self, x):
        return x.dtype

    def shape(self, x):
        return tf.shape(x)

    def rank(self, x):
        return tf.rank(x)

    def abs(self, x):
        return tf.abs(x)

    def set_value(self, x, value):
        tf.assign(x, np.asarray(value)).op.run(session=self.get_current_session())

    def zeros(self, shape, dtype=None, name=None):
        dtype = dtype or self.floatx()
        return tf.zeros(shape, dtype=dtype, name=name)

    def zeros_like(self, x, dtype=None, name=None):
        return tf.zeros_like(x, dtype=dtype, name=name)

    def ones(self, shape, dtype=None, name=None):
        dtype = dtype or self.floatx()
        return tf.ones(shape, dtype=dtype, name=name)

    def ones_like(self, x, dtype=None, name=None):
        return tf.ones_like(x, dtype=dtype, name=name)

    def random_normal(self, shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
        dtype = dtype or self.floatx()
        return tf.random_normal(shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed)

    def random_truncated_normal(self, shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
        dtype = dtype or self.floatx()
        return tf.truncated_normal(shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed)

    def random_uniform(self, shape, minval=0, maxval=None, dtype=None, seed=None):
        dtype = dtype or self.floatx()
        return tf.random_uniform(shape, minval=minval, maxval=maxval, dtype=dtype, seed=seed)

    def random_binomial(self, shape, p=0.5, dtype=None):
        dtype = dtype or self.floatx()
        return tf.where(tf.random_uniform(shape, dtype=dtype) <= p,
                        tf.ones(shape, dtype=dtype),
                        tf.zeros(shape, dtype=dtype))

    def random_gamma(self, shape, alpha, beta=None):
        return tf.random_gamma(shape, alpha, beta=beta)

    def tanh(self, x, name=None):
        return tf.tanh(x, name=name)

    def sigmoid(self, x, name=None):
        return tf.sigmoid(x, name=name)

    def relu(self, x, alpha=0., name=None):
        return tf.nn.relu(x, name=name)

    def softmax(self, x, T=1.0):
        return tf.nn.softmax(x)

    def softplus(self, x):
        return tf.nn.softplus(x)

    def dropout(self, x, p, seed=None):
        retain_prob = 1. - p
        if seed is None:
            seed = np.random.randint(10e6)
        return tf.nn.dropout(x * 1., retain_prob, seed=seed)
    def conv2d(self, x, kernel, strides=(1, 1), border_mode='same',
               image_shape=None, filter_shape=None):
        '''
        Run on cuDNN if available.
        border_mode: string, "same" or "valid".
        dim_ordering: whether to use Theano or TensorFlow dimension ordering
        in inputs/kernels/outputs.
        '''
        if border_mode == 'same':
            padding = 'SAME'
        elif border_mode == 'valid':
            padding = 'VALID'
        else:
            raise Exception('Invalid border mode: ' + str(border_mode))
        # strides = strides# + (1,)
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float32')
            kernel = tf.cast(kernel, 'float32')
        x = tf.nn.convolution(input=x, filter=kernel, strides=strides, padding=padding,
                              data_format='NHWC')
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float64')
        return x

    def conv2d_transpose(self, x, kernel, dim_out, strides=(1, 1), border_mode='same'):
        if border_mode == 'same':
            padding = 'SAME'
        elif border_mode == 'valid':
            padding = 'VALID'
        else:
            raise Exception('Invalid border mode: ' + str(border_mode))
        output_shape = [self.shape(x)[0]] + list(dim_out)
        strides = (1,) + strides + (1,)
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float32')
            kernel = tf.cast(kernel, 'float32')
        x = tf.nn.conv2d_transpose(x, kernel, output_shape, strides, padding=padding)
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float64')
        return x

    def pool2d(self, x, pool_size, strides=(1, 1),
               border_mode='valid', pool_mode='max'):
        '''
        pool_size: tuple of 2 integers.
        strides: tuple of 2 integers.
        border_mode: one of "valid", "same".
        dim_ordering: one of "th", "tf".
        '''
        if border_mode == 'same':
            padding = 'SAME'
        elif border_mode == 'valid':
            padding = 'VALID'
        else:
            raise Exception('Invalid border mode: ' + str(border_mode))
        strides = (1,) + strides + (1,)
        pool_size = (1,) + pool_size + (1,)
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float32')
        if pool_mode == 'max':
            x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
        elif pool_mode == 'avg':
            x = tf.nn.avg_pool(x, pool_size, strides, padding=padding)
        else:
            raise Exception('Invalid pooling mode: ' + str(pool_mode))
        if self.floatx() == 'float64':
            x = tf.cast(x, 'float64')
        return x
def flatten(self, x, leading=1):
leading_dim = self.shape(x)[:leading]
new_shape = tf.concat([leading_dim, [-1]], 0)
return tf.reshape(x, new_shape)
def split(self, x, num_splits, axis=0):
axis = axis % len(x.get_shape())
return tf.split(x, num_splits, axis=axis)
def reshape(self, x, shape):
return tf.reshape(x, shape)
def sum(self, x, axis=None, keepdims=False):
if x.dtype.base_dtype == tf.bool:
x = tf.cast(x, self.floatx())
return tf.reduce_sum(x, axis=axis, keepdims=keepdims)
def prod(self, x, axis=None, keepdims=False):
return tf.reduce_prod(x, axis=axis, keepdims=keepdims)
def mean(self, x, axis=None, keepdims=False):
if axis is not None and axis < 0:
axis = axis % len(x.get_shape())
if x.dtype.base_dtype == tf.bool:
x = tf.cast(x, self.floatx())
return tf.reduce_mean(x, axis=axis, keepdims=keepdims)
def batch_norm(self, x, beta, gamma):
mean, variance = tf.nn.moments(x, [0])
normed = tf.nn.batch_normalization(tf.identity(x), mean, variance, beta, gamma, self.epsilon())
return normed
def log(self, x):
return tf.log(x)
def log1p(self, x):
return tf.log1p(x)
def exp(self, x):
return tf.exp(x)
def pow(self, x, a):
return tf.pow(x, a)
def mul(self, x, y):
return tf.multiply(x, y)
def sqrt(self, x):
x = tf.clip_by_value(x,
tf.cast(0., dtype=self.floatx()),
tf.cast(np.inf, dtype=self.floatx()))
return tf.sqrt(x)
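The clip-then-sqrt pattern above keeps tiny negative values (floating-point round-off) from producing NaNs. A minimal NumPy sketch of the same idea, with a hypothetical helper name:

```python
import numpy as np

def safe_sqrt(x):
    """Clip to [0, inf) before taking the square root, mirroring the
    clip-then-sqrt pattern above so small negative round-off values
    do not produce NaNs."""
    return np.sqrt(np.clip(x, 0.0, np.inf))

out = safe_sqrt(np.array([-1e-12, 4.0]))
# The negative round-off value maps to 0.0 instead of NaN.
```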
def categorical_crossentropy(self, output, target, from_logits=False, axis=-1):
if not from_logits:
# scale preds so that the class probas of each sample sum to 1
output = output / tf.reduce_sum(output, axis, True)
# manual computation of crossentropy
output = tf.clip_by_value(output, self.epsilon(), 1. - self.epsilon())
return -tf.reduce_sum(target * tf.log(output), axis)
else:
return tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=target)
def binary_crossentropy(self, output, target, from_logits=False):
if from_logits:
return tf.nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
else:
raise NotImplementedError
def concatenate(self, tensors, axis=-1):
return tf.concat(tensors, axis=axis)
def sort(self, tensor):
values, indices = tf.nn.top_k(-tensor, k=tf.shape(tensor)[0])
return -values, indices
def argmin(self, tensor, axis=0):
return tf.argmin(tensor, axis=axis)
def map(self, function, input):
return tf.map_fn(function, input)
def rnn(self, step_function, input, initial_states, **kwargs):
num_dims = self.rank(input)
perm = self.concat([[1, 0], self.range(2, num_dims)])
input = self.transpose(input, perm)
def step(state, input_):
output, state = step_function(input_, state, **kwargs)
return state
result = tf.scan(step, input, initial_states)[0]
return self.transpose(result, perm)
def while_loop(self, condition, body, loop_vars, **kwargs):
return tf.while_loop(condition, body, loop_vars)
def scan(self, fn, elems, initializer=None):
return tf.scan(fn, elems, initializer=initializer, back_prop=True)
def logdet(self, A, **kwargs):
A = (A + self.matrix_transpose(A)) / 2.
term = tf.log(tf.matrix_diag_part(self.cholesky(A, **kwargs)))
return 2 * tf.reduce_sum(term, -1)
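`logdet` uses the standard identity for a symmetric positive-definite matrix: log|A| = 2 * sum(log(diag(chol(A)))), after symmetrizing A to absorb numerical asymmetry. A NumPy sketch of the same computation (assumed helper name):

```python
import numpy as np

def logdet(A):
    # For symmetric positive-definite A: log|A| = 2 * sum(log(diag(chol(A)))).
    A = (A + A.T) / 2.0               # symmetrize, as in the method above
    L = np.linalg.cholesky(A)
    return 2.0 * np.sum(np.log(np.diag(L)))

A = np.diag([2.0, 3.0])
# log det of diag(2, 3) is log(6).
```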
def einsum(self, subscripts, *operands):
return tf.einsum(subscripts, *operands)
def cholesky(self, A, lower=True, warn=True, correct=False):
assert lower is True
# Gradient through py_func adapted from https://gist.github.com/harpone/3453185b41d8d985356cbe5e57d67342
def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
tf.RegisterGradient(rnd_name)(grad)
g = tf.get_default_graph()
with g.gradient_override_map({'PyFunc': rnd_name, 'PyFuncStateless': rnd_name}):
return tf.py_func(func, inp, Tout, stateful=stateful, name=name)
def correction(A):
A_new, del_ = A.copy(), 1e-4
while True:
try:
np.linalg.cholesky(A_new)
break
except np.linalg.LinAlgError:
if warn:
logging.warning('[Cholesky] singular matrix, adding diagonal {}'.format(del_))
A_new = A + del_ * np.eye(A.shape[-1]).astype(self.floatx())
del_ *= 2
return A_new
def _correction_grad(op, grad):
A = op.inputs[0]
return grad
if correct:
shape = A.get_shape()
A = py_func(correction, [A], A.dtype, grad=_correction_grad)
A.set_shape(shape)
return tf.cholesky(A)
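The `correction` helper above retries NumPy's Cholesky factorization with a doubling diagonal jitter until the matrix becomes positive definite. The same idea as a standalone NumPy sketch (hypothetical function name):

```python
import numpy as np

def jittered_cholesky(A, initial_jitter=1e-4):
    """Retry the Cholesky factorization, doubling a diagonal jitter each
    time it fails, as in the `correction` helper above."""
    jitter = initial_jitter
    A_new = A.copy()
    while True:
        try:
            return np.linalg.cholesky(A_new), A_new
        except np.linalg.LinAlgError:
            A_new = A + jitter * np.eye(A.shape[-1])
            jitter *= 2

A = np.ones((2, 2))              # singular: plain Cholesky would fail
L, A_new = jittered_cholesky(A)
assert np.allclose(L @ L.T, A_new)
```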
# Tensorflow interface
def placeholder(self, dtype, shape=None, name=None):
return self._placeholder(dtype=dtype, shape=shape, name=name)
def variable(self, initial_value=None, trainable=True, name=None):
return self._variable(initial_value=initial_value, trainable=trainable, name=name)
def assign(self, a, b):
return tf.assign(a, b)
def to_float(self, x):
return tf.cast(x, self.floatx())
def constant(self, value, dtype=None, shape=None):
return tf.constant(value, dtype=dtype, shape=shape)
def get_shape(self, x):
return [a.value for a in tf.convert_to_tensor(x).get_shape()]
def get_value(self, variable):
return self.get_current_session().run(variable)
def concat(self, values, axis=-1):
return tf.concat(values, axis=axis)
def gather(self, params, indices):
return tf.gather(params, indices)
def gather_nd(self, params, indices):
return tf.gather_nd(params, indices)
def equal(self, x, y):
return tf.equal(x, y)
def logical_and(self, x, y):
return tf.logical_and(x, y)
def matmul(self, a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None):
return tf.matmul(a, b, transpose_a=transpose_a, transpose_b=transpose_b, a_is_sparse=a_is_sparse, name=name)
def trace(self, a):
return tf.trace(a)
def transpose(self, a, perm=None):
return tf.transpose(a, perm=perm)
def matrix_transpose(self, a):
return tf.matrix_transpose(a)
def matrix_diag(self, a):
return tf.matrix_diag(a)
def matrix_diag_part(self, a):
return tf.matrix_diag_part(a)
def set_diag(self, input, diagonal):
return tf.linalg.set_diag(input, diagonal)
def band_part(self, input, num_lower, num_upper):
return tf.linalg.band_part(input, num_lower, num_upper)
def vec(self, A):
A = self.matrix_transpose(A)
leading_dim = self.shape(A)[:-2]
return self.reshape(A, self.concat([
leading_dim,
[-1]
], 0))
def unvec(self, v, m, n):
leading_dim = self.shape(v)[:-1]
return self.matrix_transpose(self.reshape(v, self.concat([
leading_dim,
[n, m]
], 0)))
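`vec` flattens a matrix in column-major order (a transpose followed by a row-major reshape) and `unvec` inverts it. The same trick in NumPy, as a sketch:

```python
import numpy as np

def vec(A):
    # Column-major (Fortran-order) flattening: transpose, then row-major reshape.
    return np.swapaxes(A, -1, -2).reshape(A.shape[:-2] + (-1,))

def unvec(v, m, n):
    # Inverse of vec: reshape to (n, m) row-major, then transpose back to (m, n).
    return np.swapaxes(v.reshape(v.shape[:-1] + (n, m)), -1, -2)

A = np.array([[1, 2], [3, 4]])
assert vec(A).tolist() == [1, 3, 2, 4]      # columns stacked
assert (unvec(vec(A), 2, 2) == A).all()     # round trip
```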
def kronecker(self, A, B):
C = (A[..., None, None] * B[..., None, None, :, :])
blocks = [
tf.unstack(a, axis=-3 % len(a.shape)) for a in
tf.unstack(C, axis=-4 % len(C.shape))
]
return tf.concat([
tf.concat(a, -1) for a in blocks
], -2)
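`kronecker` builds the Kronecker product by broadcasting into blocks and re-concatenating along the last two axes. In NumPy, `np.kron` produces the same block structure directly:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.eye(2)
K = np.kron(A, B)
# Each entry A[i, j] scales a copy of B, giving an (m*p) x (n*q) block matrix.
assert K.shape == (4, 4)
assert (K[:2, 2:] == 2 * B).all()   # block (0, 1) is A[0, 1] * B
```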
def block_sum(self, X, m, n):
leading_dim = self.shape(X)[:-2]
block_sum = self.zeros(self.concat([leading_dim, [m, m]], 0))
for i in range(n):
block_sum += X[..., i*m:(i+1)*m, i*m:(i+1)*m]
return block_sum
def block_trace(self, X, m, n):
blocks = []
for i in range(n):
blocks.append([])
for j in range(n):
block = self.trace(X[..., i*m:(i+1)*m, j*m:(j+1)*m])
blocks[-1].append(block)
return self.pack([
self.pack([
b for b in block
])
for block in blocks
])
def kronecker_vec(self, X, m, n):
leading_dim = tf.shape(X)[:-2]
blocks = []
for i in range(n):
blocks.append([])
for j in range(m):
idx = i * m + j
block = tf.matrix_transpose(tf.reshape(X[..., idx, :], tf.concat([leading_dim, [n, m]], 0)))
blocks[-1].append(block)
return tf.concat([tf.concat(b, -2) for b in blocks], -1)
def lower_triangular(self, a):
return fill_triangular(a)
def matrix_inverse(self, a):
return tf.matrix_inverse(a)
def expand_dims(self, x, dim=-1):
return tf.expand_dims(x, dim)
def tile(self, input, multiples):
return tf.tile(input, multiples)
def gradients(self, loss, variables):
return tf.gradients(loss, variables)
def square(self, x):
return tf.square(x)
def clip_by_value(self, x, low, high):
return tf.clip_by_value(x, low, high)
def stack(self, values, axis=0, name='stack'):
return tf.stack(values, axis=axis, name=name)
def unstack(self, values, num=None, axis=0, name='unstack'):
return tf.unstack(values, num=num, axis=axis, name=name)
def pack(self, *args, **kwargs):
return self.stack(*args, **kwargs)
def unpack(self, *args, **kwargs):
return self.unstack(*args, **kwargs)
def reduce_max(self, x, axis=None, keepdims=False):
return tf.reduce_max(x, axis=axis, keepdims=keepdims)
def reduce_logsumexp(self, x, axis=None, keepdims=False):
return tf.reduce_logsumexp(x, axis=axis, keepdims=keepdims)
def matrix_solve(self, matrix, rhs, adjoint=None):
return tf.matrix_solve(matrix, rhs, adjoint=adjoint)
# Theano interface
def dim(self, x):
return len(x.get_shape())
def scalar(self, name=None, dtype=None, shape=[]):
dtype = dtype or self.floatx()
return self._placeholder(dtype=dtype, shape=shape, name=name)
def vector(self, name=None, dtype=None, shape=[None]):
dtype = dtype or self.floatx()
return self._placeholder(dtype=dtype, shape=shape, name=name)
def matrix(self, name=None, dtype=None, shape=[None, None]):
dtype = dtype or self.floatx()
return self._placeholder(dtype=dtype, shape=shape, name=name)
def tensor3(self, name=None, dtype=None, shape=[None, None, None]):
dtype = dtype or self.floatx()
return self._placeholder(dtype=dtype, shape=shape, name=name)
def tensor4(self, name=None, dtype=None, shape=[None, None, None, None]):
dtype = dtype or self.floatx()
return self._placeholder(dtype=dtype, shape=shape, name=name)
def shared(self, value, name=None):
return self._variable(initial_value=value, name=name)
def arange(self, start, stop=None, step=None):
return self.range(start, stop=stop, step=step)
def sparse_dot(self, x, y):
return tf.sparse_tensor_dense_matmul(x, y)
def dot(self, x, y):
if len(x.get_shape()) != len(y.get_shape()):
len_y = len(y.get_shape())
new_y_shape = tf.concat([tf.shape(x)[:-len_y], tf.shape(y)], 0)
y = tf.broadcast_to(y, new_y_shape)
return tf.matmul(x, y)
def outer(self, x, y):
if len(x.get_shape()) == 0:
return x * y
return x[...,:,None] * y[...,None,:]
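The `outer` method above relies on broadcasting: expanding x to shape (..., m, 1) and y to (..., 1, n) makes their elementwise product the (batched) outer product. The identical NumPy expression:

```python
import numpy as np

def outer(x, y):
    # Batched outer product via broadcasting: (..., m, 1) * (..., 1, n) -> (..., m, n)
    return x[..., :, None] * y[..., None, :]

assert outer(np.array([1, 2]), np.array([3, 4])).tolist() == [[3, 4], [6, 8]]
```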
def eye(self, d, batch_shape=None):
return tf.eye(d, batch_shape=batch_shape)
def function(self, inputs, outputs, updates=[]):
return TensorflowFunction(self, inputs, outputs, updates)
def grad(self, loss, variables):
return tf.gradients(loss, variables)
def sqr(self, x):
return tf.square(x)
def argmax(self, x, axis=None):
return tf.argmax(x, axis=axis)
def max(self, x, axis=None, keepdims=False):
return tf.reduce_max(x, axis=axis, keepdims=keepdims)
def logsumexp(self, x, axis=None, keepdims=False):
return tf.reduce_logsumexp(x, axis=axis, keepdims=keepdims)
def switch(self, condition, then_expression, else_expression):
'''Switches between two operations depending on a scalar value (int or bool).
Note that both `then_expression` and `else_expression`
should be symbolic tensors of the *same shape*.
# Arguments
condition: scalar tensor.
then_expression: TensorFlow operation.
else_expression: TensorFlow operation.
'''
return tf.where(condition, then_expression, else_expression)
def alloc(self, value, shape, unbroadcast=None, dtype=None):
dtype = dtype or self.floatx()
vals = tf.fill(tf.stack(shape), np.array(value).astype(dtype))
new_shape = []
for s in shape:
if isinstance(s, tf.Tensor):
new_shape.append(None)
else:
new_shape.append(s)
vals.set_shape(new_shape)
return vals
def range(self, start, limit=None, delta=1):
if limit is None:
return tf.range(start, delta=delta)
return tf.range(start, limit, delta=delta)
def solve(self, a, b):
return tf.matrix_solve(a, b)
def one_hot(self, indices, depth):
return tf.one_hot(indices, depth)
# Science methods
def gammaln(self, x):
return tf.lgamma(x)
def multigammaln(self, a, p):
p = self.to_float(p)
p_ = self.cast(p, 'int32')
a = a[..., None]
i = self.to_float(self.range(1, p_ + 1))
term1 = p * (p - 1) / 4. * self.log(np.pi)
term2 = self.gammaln(a - (i - 1) / 2.)
return term1 + self.sum(term2, axis=-1)
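`multigammaln` implements the multivariate log-gamma expansion ln Gamma_p(a) = p(p-1)/4 * ln(pi) + sum over i=1..p of ln Gamma(a - (i-1)/2), the same formula the tensor code above evaluates. A pure-Python sketch for scalar inputs:

```python
import math

def multigammaln(a, p):
    """Multivariate log-gamma:
    ln Gamma_p(a) = p(p-1)/4 * ln(pi) + sum_{i=1}^{p} ln Gamma(a - (i-1)/2)."""
    term1 = p * (p - 1) / 4.0 * math.log(math.pi)
    term2 = sum(math.lgamma(a - (i - 1) / 2.0) for i in range(1, p + 1))
    return term1 + term2

# For p = 1 it reduces to the ordinary log-gamma.
assert abs(multigammaln(4.0, 1) - math.lgamma(4.0)) < 1e-12
```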
def digamma(self, a):
return tf.digamma(a)
# utils/Formatting.py (levindoneto/lmGen, MIT)
import re
''' Function for Formatting n-grams.
@Parameters: Tuple: n-gram to be formatted.
@Return: String: formatted gram.
'''
def formatGram(ngram):
return re.sub("[(',)]", '', str(ngram))
''' Function for Formatting sentences.
@Parameters: Sentence: unformatted sentence.
@Return: String: formatted sentence.
'''
def formatSentence(sentence):
sentence = list(sentence)
sentence[0] = sentence[0].upper()
sentence = "".join(sentence)
return sentence + '.\n'
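A quick demonstration of the two helpers, with self-contained re-definitions so the snippet runs standalone:

```python
import re

def formatGram(ngram):
    # str(('a', 'b')) is "('a', 'b')"; stripping ( ' , ) leaves "a b".
    return re.sub("[(',)]", '', str(ngram))

def formatSentence(sentence):
    sentence = list(sentence)
    sentence[0] = sentence[0].upper()
    return "".join(sentence) + '.\n'

assert formatGram(('the', 'quick', 'fox')) == 'the quick fox'
assert formatSentence('hello world') == 'Hello world.\n'
```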
# experiment_wrapper/__init__.py (stonkens/experiment_wrapper, MIT)
from typing import Any, Dict, List
class Dynamics:
"""Provides a template for the functionality required from a dynamics class to interface with the experiment
wrapper functionality.
A dynamics class must implement the following methods:
- n_dims: returns the number of dimensions of the state space
- control_dims: returns the number of dimensions of the control space
- dt: returns the time step of the dynamics
- step: takes in the current state, control and time and returns the next state"""
STATES: List[str]
CONTROLS: List[str]
def __init__(self):
self._n_dims: int
self._control_dims: int
self._dt: float
raise RuntimeError("Dynamics is a template class")
@property
def n_dims(self) -> int:
return self._n_dims
@property
def control_dims(self) -> int:
return self._control_dims
@property
def dt(self) -> float:
return self._dt
def step(self, x: Any, u: Any, t: float) -> Any:
pass
class Controller:
"""Provides a template for the functionality required from a controller class to interface with the experiment
wrapper functionality.
A controller class must implement the following methods:
- __call__: takes in the current state and time and returns the control (note: a function object can be used, e.g.:
def nominal_policy(x, t):
return L @ x
with L the LQR controller matrix"""
def __init__(self):
raise RuntimeError("Controller is a template class")
def __call__(self, x: Any, t: float) -> Any:
pass
class ExtendedController(Controller):
"""Provides a template for functionality that is optional called within the experiment wrapper functionality.
A controller class (in addition to being callable) can also implement the following methods:
- controller_dt: returns the time step of the controller
- save_info: takes in the current state, control and time and returns a dictionary of information to be saved for
all measurements
- save_measurements: takes in the current state, control and time and returns a dictionary of additional
measurements to be saved
- reset: takes in the current state and resets the controller to an initial state
"""
def __init__(self):
self._controller_dt: float
raise RuntimeError("ExtendedController is a template class")
@property
def controller_dt(self) -> float:
return self._controller_dt
def save_info(self, x: Any, u: Any, t: float) -> Dict[str, Any]:
return {}
def save_measurements(self, x: Any, u: Any, t: float) -> Dict[str, Any]:
return {}
def reset(self, x: Any) -> None:
pass
from experiment_wrapper.experiment import Experiment, ScenarioList, Controllers
from experiment_wrapper.rollout_trajectory import (
RolloutTrajectory,
TimeSeriesExperiment,
StateSpaceExperiment,
)
from experiment_wrapper.experiment_suite import ExperimentSuite
__version__ = "1.0.1"
# nautobot_secrets_providers/urls.py (jifox/nautobot-plugin-secrets-providers, Apache-2.0)
"""Django urlpatterns declaration for nautobot_secrets_providers plugin."""
from django.urls import path
from nautobot_secrets_providers import views
app_name = "nautobot_secrets_providers"
urlpatterns = [
path("", views.SecretsProvidersHomeView.as_view(), name="home"),
]
# app/main/routes.py (theambidextrous/digitalemployeeapp, MIT)
main = Blueprint('main', __name__)
# routes
@main.route('/', methods = ['GET'])
def Abort():
return redirect(url_for('main.index'))
# abort(403)
@main.route('/default.tpl', methods = ['GET'])
def index():
title = 'DE App'
return render_template('dflt.html', title = title)
| 29.153846 | 88 | 0.672823 | 49 | 379 | 5.040816 | 0.591837 | 0.048583 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009346 | 0.153034 | 379 | 12 | 89 | 31.583333 | 0.760125 | 0.044855 | 0 | 0 | 0 | 0 | 0.133705 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0.111111 | 0.555556 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7a042b77715a588fe196553691f390b7d45b469f | 314 | py | Python | arm_control/src/orientation.py | ALxander19/zobov_arm | 8b5b322b53a7a0d9c91fcbc720473a2a6e6f5826 | [
"BSD-2-Clause"
] | null | null | null | arm_control/src/orientation.py | ALxander19/zobov_arm | 8b5b322b53a7a0d9c91fcbc720473a2a6e6f5826 | [
"BSD-2-Clause"
] | null | null | null | arm_control/src/orientation.py | ALxander19/zobov_arm | 8b5b322b53a7a0d9c91fcbc720473a2a6e6f5826 | [
"BSD-2-Clause"
] | null | null | null | # tf.transformations alternative is not yet available in tf2
from tf.transformations import quaternion_from_euler
if __name__ == '__main__':
# RPY to convert: 90deg, 0, -90deg
q = quaternion_from_euler(1.5707, 0, -1.5707)
print("The quaternion representation is %s %s %s %s." % (q[0], q[1], q[2], q[3]))
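The same RPY-to-quaternion conversion can be written directly from the Euler-angle formulas. This is a hedged sketch: it assumes the 'sxyz' axes convention, which is the tf.transformations default, and returns the quaternion in (x, y, z, w) order:

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    """RPY ('sxyz' convention) Euler angles to quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

q = quaternion_from_euler(1.5707, 0, -1.5707)
# 90 deg roll and -90 deg yaw give approximately (0.5, -0.5, -0.5, 0.5).
```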
# sdk/python/pulumi_aws/ec2/managed_prefix_list.py (jen20/pulumi-aws, ECL-2.0/Apache-2.0)
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities, _tables
from . import outputs
from ._inputs import *
__all__ = ['ManagedPrefixListArgs', 'ManagedPrefixList']
@pulumi.input_type
class ManagedPrefixListArgs:
def __init__(__self__, *,
address_family: pulumi.Input[str],
max_entries: pulumi.Input[int],
entries: Optional[pulumi.Input[Sequence[pulumi.Input['ManagedPrefixListEntryArgs']]]] = None,
name: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a ManagedPrefixList resource.
:param pulumi.Input[str] address_family: The address family (`IPv4` or `IPv6`) of
this prefix list.
:param pulumi.Input[int] max_entries: The maximum number of entries that
this prefix list can contain.
:param pulumi.Input[Sequence[pulumi.Input['ManagedPrefixListEntryArgs']]] entries: Can be specified multiple times for each prefix list entry.
Each entry block supports fields documented below. Different entries may have
overlapping CIDR blocks, but a particular CIDR should not be duplicated.
:param pulumi.Input[str] name: The name of this resource. The name must not start with `com.amazonaws`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to this resource.
"""
pulumi.set(__self__, "address_family", address_family)
pulumi.set(__self__, "max_entries", max_entries)
if entries is not None:
pulumi.set(__self__, "entries", entries)
if name is not None:
pulumi.set(__self__, "name", name)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="addressFamily")
def address_family(self) -> pulumi.Input[str]:
"""
The address family (`IPv4` or `IPv6`) of
this prefix list.
"""
return pulumi.get(self, "address_family")
@address_family.setter
def address_family(self, value: pulumi.Input[str]):
pulumi.set(self, "address_family", value)
@property
@pulumi.getter(name="maxEntries")
def max_entries(self) -> pulumi.Input[int]:
"""
The maximum number of entries that
this prefix list can contain.
"""
return pulumi.get(self, "max_entries")
@max_entries.setter
def max_entries(self, value: pulumi.Input[int]):
pulumi.set(self, "max_entries", value)
@property
@pulumi.getter
def entries(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ManagedPrefixListEntryArgs']]]]:
"""
Can be specified multiple times for each prefix list entry.
Each entry block supports fields documented below. Different entries may have
overlapping CIDR blocks, but a particular CIDR should not be duplicated.
"""
return pulumi.get(self, "entries")
@entries.setter
def entries(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ManagedPrefixListEntryArgs']]]]):
pulumi.set(self, "entries", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of this resource. The name must not start with `com.amazonaws`.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags to assign to this resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
class ManagedPrefixList(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
address_family: Optional[pulumi.Input[str]] = None,
entries: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ManagedPrefixListEntryArgs']]]]] = None,
max_entries: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None,
__name__=None,
__opts__=None):
"""
Provides a managed prefix list resource.
> **NOTE on `max_entries`:** When you reference a Prefix List in a resource,
the maximum number of entries for the prefix lists counts as the same number of rules
or entries for the resource. For example, if you create a prefix list with a maximum
of 20 entries and you reference that prefix list in a security group rule, this counts
as 20 rules for the security group.
## Example Usage
Basic usage
```python
import pulumi
import pulumi_aws as aws
example = aws.ec2.ManagedPrefixList("example",
address_family="IPv4",
max_entries=5,
entries=[
aws.ec2.ManagedPrefixListEntryArgs(
cidr=aws_vpc["example"]["cidr_block"],
description="Primary",
),
aws.ec2.ManagedPrefixListEntryArgs(
cidr=aws_vpc_ipv4_cidr_block_association["example"]["cidr_block"],
description="Secondary",
),
],
tags={
"Env": "live",
})
```
## Import
Prefix Lists can be imported using the `id`, e.g.
```sh
$ pulumi import aws:ec2/managedPrefixList:ManagedPrefixList default pl-0570a1d2d725c16be
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] address_family: The address family (`IPv4` or `IPv6`) of
this prefix list.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ManagedPrefixListEntryArgs']]]] entries: Can be specified multiple times for each prefix list entry.
Each entry block supports fields documented below. Different entries may have
overlapping CIDR blocks, but a particular CIDR should not be duplicated.
:param pulumi.Input[int] max_entries: The maximum number of entries that
this prefix list can contain.
:param pulumi.Input[str] name: The name of this resource. The name must not start with `com.amazonaws`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to this resource.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ManagedPrefixListArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a managed prefix list resource.
> **NOTE on `max_entries`:** When you reference a Prefix List in a resource,
the maximum number of entries for the prefix lists counts as the same number of rules
or entries for the resource. For example, if you create a prefix list with a maximum
of 20 entries and you reference that prefix list in a security group rule, this counts
as 20 rules for the security group.
## Example Usage
Basic usage
```python
import pulumi
import pulumi_aws as aws
example = aws.ec2.ManagedPrefixList("example",
address_family="IPv4",
max_entries=5,
entries=[
aws.ec2.ManagedPrefixListEntryArgs(
cidr=aws_vpc["example"]["cidr_block"],
description="Primary",
),
aws.ec2.ManagedPrefixListEntryArgs(
cidr=aws_vpc_ipv4_cidr_block_association["example"]["cidr_block"],
description="Secondary",
),
],
tags={
"Env": "live",
})
```
## Import
Prefix Lists can be imported using the `id`, e.g.
```sh
$ pulumi import aws:ec2/managedPrefixList:ManagedPrefixList default pl-0570a1d2d725c16be
```
:param str resource_name: The name of the resource.
:param ManagedPrefixListArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ManagedPrefixListArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
address_family: Optional[pulumi.Input[str]] = None,
entries: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ManagedPrefixListEntryArgs']]]]] = None,
max_entries: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
__props__=None,
__name__=None,
__opts__=None):
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
if address_family is None and not opts.urn:
raise TypeError("Missing required property 'address_family'")
__props__['address_family'] = address_family
__props__['entries'] = entries
if max_entries is None and not opts.urn:
raise TypeError("Missing required property 'max_entries'")
__props__['max_entries'] = max_entries
__props__['name'] = name
__props__['tags'] = tags
__props__['arn'] = None
__props__['owner_id'] = None
__props__['version'] = None
super(ManagedPrefixList, __self__).__init__(
'aws:ec2/managedPrefixList:ManagedPrefixList',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
address_family: Optional[pulumi.Input[str]] = None,
arn: Optional[pulumi.Input[str]] = None,
entries: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ManagedPrefixListEntryArgs']]]]] = None,
max_entries: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
owner_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
version: Optional[pulumi.Input[int]] = None) -> 'ManagedPrefixList':
"""
Get an existing ManagedPrefixList resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] address_family: The address family (`IPv4` or `IPv6`) of
this prefix list.
:param pulumi.Input[str] arn: The ARN of the prefix list.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['ManagedPrefixListEntryArgs']]]] entries: Can be specified multiple times for each prefix list entry.
Each entry block supports fields documented below. Different entries may have
overlapping CIDR blocks, but a particular CIDR should not be duplicated.
:param pulumi.Input[int] max_entries: The maximum number of entries that
this prefix list can contain.
:param pulumi.Input[str] name: The name of this resource. The name must not start with `com.amazonaws`.
:param pulumi.Input[str] owner_id: The ID of the AWS account that owns this prefix list.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to this resource.
:param pulumi.Input[int] version: The latest version of this prefix list.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["address_family"] = address_family
__props__["arn"] = arn
__props__["entries"] = entries
__props__["max_entries"] = max_entries
__props__["name"] = name
__props__["owner_id"] = owner_id
__props__["tags"] = tags
__props__["version"] = version
return ManagedPrefixList(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="addressFamily")
def address_family(self) -> pulumi.Output[str]:
"""
The address family (`IPv4` or `IPv6`) of
this prefix list.
"""
return pulumi.get(self, "address_family")
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
The ARN of the prefix list.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter
def entries(self) -> pulumi.Output[Optional[Sequence['outputs.ManagedPrefixListEntry']]]:
"""
Can be specified multiple times for each prefix list entry.
Each entry block supports fields documented below. Different entries may have
overlapping CIDR blocks, but a particular CIDR should not be duplicated.
"""
return pulumi.get(self, "entries")
@property
@pulumi.getter(name="maxEntries")
def max_entries(self) -> pulumi.Output[int]:
"""
The maximum number of entries that
this prefix list can contain.
"""
return pulumi.get(self, "max_entries")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of this resource. The name must not start with `com.amazonaws`.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="ownerId")
def owner_id(self) -> pulumi.Output[str]:
"""
The ID of the AWS account that owns this prefix list.
"""
return pulumi.get(self, "owner_id")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A map of tags to assign to this resource.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def version(self) -> pulumi.Output[int]:
"""
The latest version of this prefix list.
"""
return pulumi.get(self, "version")
def translate_output_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return _tables.SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
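The `translate_output_property` / `translate_input_property` helpers above fall back to the property name itself whenever it is missing from the lookup table. A standalone sketch of that `dict.get(prop) or prop` idiom — the table entries below are invented for illustration, not the real generated table:

```python
# Hypothetical stand-in for the generated CAMEL_TO_SNAKE_CASE_TABLE.
CAMEL_TO_SNAKE_CASE_TABLE = {"maxEntries": "max_entries", "addressFamily": "address_family"}

def translate_output_property(prop: str) -> str:
    # Known camelCase names map to snake_case; unknown names pass through unchanged.
    return CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

print(translate_output_property("maxEntries"))  # → max_entries
print(translate_output_property("arn"))         # → arn
```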
# ---- file: amy/workshops/migrations/0158_curriculum_workshoprequest.py | repo: code-review-doctor/amy | license: MIT ----
# Generated by Django 2.1.2 on 2018-10-27 15:50
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django_countries.fields
class Migration(migrations.Migration):
dependencies = [
('workshops', '0157_auto_20181006_0859'),
]
operations = [
migrations.CreateModel(
name='Curriculum',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('active', models.BooleanField(default=True)),
('slug', models.CharField(default='', help_text="Use computer-friendly text here, e.g. 'dc-ecology-r'.", max_length=40, unique=True, verbose_name='Curriculum ID')),
('name', models.CharField(default='', help_text="Use user-friendly language, e.g. 'Data Carpentry (Ecology with R)'.", max_length=200, unique=True, verbose_name='Curriculum name')),
('unknown', models.BooleanField(blank=True, default=False, help_text="Mark this curriculum record as 'I don't know yet', or 'Unknown', or 'Not sure yet'. There can be only one such record in the database.", verbose_name='Unknown entry')),
],
options={
'verbose_name': 'Curriculum',
'verbose_name_plural': 'Curricula',
},
),
migrations.CreateModel(
name='WorkshopRequest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('last_updated_at', models.DateTimeField(auto_now=True, null=True)),
('data_privacy_agreement', models.BooleanField(default=False, verbose_name='I have read and agree to <a href="https://docs.carpentries.org/topic_folders/policies/privacy.html", target="_blank">the data privacy policy</a> of The Carpentries.')),
('code_of_conduct_agreement', models.BooleanField(default=False, verbose_name='I agree to abide by The Carpentries\' <a target="_blank"href="https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html">Code of Conduct</a>.')),
('host_responsibilities', models.BooleanField(default=False, verbose_name='I understand <a href="https://docs.carpentries.org/topic_folders/hosts_instructors/index.html">the responsibilities of the workshop host</a>, including recruiting local helpers to support the workshop (1 helper for every 8-10 learners).')),
('state', models.CharField(choices=[('p', 'Pending'), ('d', 'Discarded'), ('a', 'Accepted')], default='p', max_length=1)),
('personal', models.CharField(max_length=255, verbose_name='Personal (given) name')),
('family', models.CharField(max_length=255, verbose_name='Family name (surname)')),
('email', models.EmailField(max_length=254, verbose_name='Email address')),
('institution_name', models.CharField(blank=True, default='', max_length=255, verbose_name="Name of institution you're affiliated with")),
('institution_department', models.CharField(blank=True, default='', max_length=255, verbose_name='Department/school affiliation (if applicable)')),
('location', models.CharField(default='', help_text='City, province/state.', max_length=255, verbose_name='Location')),
('country', django_countries.fields.CountryField(max_length=2, verbose_name='Country')),
('part_of_conference', models.BooleanField(help_text='We can manage registration and other coordination for our workshop, but not other conference activities.', verbose_name='Is this workshop part of conference or larger event?')),
('conference_details', models.CharField(blank=True, default='', help_text='Name, description (if applicable).', max_length=255, verbose_name='Conference details')),
('preferred_dates', models.CharField(default='', help_text='Because we need to coordinate with instructors, a minimum of 2-3 months lead time is required for workshop planning.', max_length=255, verbose_name='Preferred dates or date range')),
('number_attendees', models.CharField(choices=[('10-40', '10-40 (one room, two instructors)'), ('40-80', '40-80 (two rooms, four instructors)'), ('80-120', '80-120 (three rooms, six instructors)')], default='10-40', help_text="This number doesn't need to be precise, but will help us decide how many instructors your workshop will need. Each workshop must have at least two instructors.", max_length=15, verbose_name='Number of attendees')),
('domains_other', models.CharField(blank=True, default='', max_length=255, verbose_name='Other domains')),
('audience_description', models.TextField(verbose_name='Please describe your anticipated audience, including their experience, background, and goals')),
('organization_type', models.CharField(choices=[('self', 'Self-organized'), ('central', 'Centrally-organized')], default=None, max_length=15, verbose_name='Will this be a self-organized or centrally-organized workshop?')),
('self_organized_github', models.CharField(blank=True, help_text='Please provide URL.', max_length=255, verbose_name='Link to workshop GitHub page')),
('centrally_organized_fee', models.CharField(blank=True, choices=[('', 'Not applicable.'), ('nonprofit', 'I am with a government site, university, or other nonprofit. I understand the workshop fee of US$2500, and agree to follow through on The Carpentries invoicing process.'), ('forprofit', 'I am with a corporate or for-profit site. I understand The Carpentries staff will contact me about workshop fees. I will follow through on The Carpentries invoicing process for the agreed upon fee.'), ('member', 'I am with a Member Organisation so the workshop fee does not apply (Instructor travel costs will still apply).'), ('waiver', 'I am requesting a waiver of the workshop fee (Instructor travel costs will still apply).')], default='', max_length=20, verbose_name='Which of the following applies to your payment for the administrative fee?')),
('waiver_circumstances', models.TextField(blank=True, help_text='Required only if you request a waiver.', verbose_name='Please explain the circumstances for your waiver request')),
('travel_expences_agreement', models.BooleanField(default=False, verbose_name='Regardless of the fee due to The Carpentries, I understand I am also responsible for travel costs for the Instructors which can include airfare, ground travel, hotel, and meals/incidentals. I understand local Instructors will be prioritized but not guaranteed. Instructor travel costs are managed directly between the host site and the Instructors, not through The Carpentries. I will share detailed information regarding policies and procedures for travel arrangements with instructors. All reimbursements will be completed within 60 days of the workshop.')),
('travel_expences_management', models.CharField(choices=[('booked', 'Hotel and airfare will be booked by site; ground travel and meals/incidentals will be reimbursed within 60 days.'), ('reimbursed', 'All expenses will be booked by instructors and reimbursed within 60 days.'), ('', 'Other:')], max_length=20, verbose_name='How will you manage travel expenses for Carpentries Instructors?')),
('travel_expences_management_other', models.CharField(blank=True, default='', max_length=255, verbose_name='Other travel expences management')),
('comment', models.TextField(blank=True, verbose_name='Is there anything else you would like to share with us?')),
                ('academic_levels', models.ManyToManyField(help_text="If you know the academic level(s) of your attendees, indicate them here.", to='workshops.AcademicLevel', verbose_name="Attendees' academic level / career stage")),
('assigned_to', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
('computing_levels', models.ManyToManyField(help_text="Indicate the attendees' level of computing experience, if known. We will ask attendees to fill in a skills survey before the workshop, so this answer can be an approximation.", to='workshops.ComputingExperienceLevel', verbose_name="Attendees' level of computing experience")),
('domains', models.ManyToManyField(help_text="The attendees' academic field(s) of study, if known.", to='workshops.KnowledgeDomain', verbose_name='Domains or topic of interest for target audience')),
('institution', models.ForeignKey(blank=True, help_text="If your institution isn't on the list, enter its name in the field below.", null=True, on_delete=django.db.models.deletion.PROTECT, to='workshops.Organization', verbose_name='Institutional affiliation')),
('language', models.ForeignKey(help_text='Our workshops are offered primarily in English, with a few of our lessons available in Spanish. While materials are mainly in English, we know it can be valuable to have an instructor who speaks the native language of the learners. We will attempt to locate Instructors speaking a particular language, but cannot guarantee the availability of non-English speaking Instructors.', on_delete=django.db.models.deletion.PROTECT, to='workshops.Language', verbose_name='Language')),
('requested_workshop_types', models.ManyToManyField(help_text="If your learners are new to programming and primarily interested in working with data, Data Carpentry is likely the best choice. If your learners are interested in learning more about programming, including version control and automation, Software Carpentry is likely the best match. Please visit the <a href='https://software-carpentry.org/lessons/'>Software Carpentry lessons page</a> or the <a href='http://www.datacarpentry.org/lessons/'>Data Carpentry lessons page</a> for more information about any of our lessons. If you’re not sure and would like to discuss with us, please select the 'Not sure' option below.", limit_choices_to={'active': True}, to='workshops.Curriculum', verbose_name='Which Carpentry workshop are you requesting?')),
],
options={
'ordering': ['created_at'],
},
),
]
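The `state` column above stores a one-character code whose human-readable label comes from its `choices` pairs (this is also what Django's `get_state_display()` resolves at runtime). A plain-Python sketch of that mapping:

```python
# The (stored value, label) pairs from the migration's `state` field.
STATE_CHOICES = [('p', 'Pending'), ('d', 'Discarded'), ('a', 'Accepted')]
state_display = dict(STATE_CHOICES)

print(state_display['p'])  # → Pending
```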
# ---- file: sap hana/connections and query execution with python/script.py | repo: Phelipe-Sempreboni/databases | license: MIT ----
# Library import.
# Make sure the library is installed.
import pyhdb
# This function reads the password from a separate file, so it is not left exposed in the application.
# If you would rather hard-code the password, delete this block and set it directly in `password` inside `connect`.
def pass_location():
    pass_file = ""  # Insert the path of the file containing the password.
    file = open(pass_file, 'r')  # Read the file.
    return file.read()
# Opens the connection to SAP HANA.
def connect():
    try:
        connection = pyhdb.connect(
            host="",  # Insert the server.
            port=0,  # Insert the port (usually numeric).
            user="",  # Insert the user.
            password=pass_location()  # Uses the pass_location block above; replace with a literal password if preferred.
        )
        return connection.cursor()
    except Exception:
        return 1
# Executes the query on SAP HANA.
def query_exec():
    # The example query below lists 10 installations from the installation master-data table.
    cursor = connect()
    cursor.execute("SET SCHEMA SAPBP1")  # Insert the database schema.
    cursor.execute('SELECT TOP 10 "/BIC/EPINSTALA" '
                   'FROM "SAPBP1"."/BIC/PEPINSTALA"')  # Insert the query.
    result = cursor.fetchall()  # Return all results.
    cursor.close()  # Close the cursor.
    return result
if __name__ == '__main__':
    resultado = query_exec()  # Execute the query (query_exec opens its own connection).
    print(resultado)  # Print the result to the terminal.
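`pass_location()` above never closes its file handle. A safer variant uses a context manager, shown here against a throwaway temporary file standing in for the real password file:

```python
import os
import tempfile

def read_password(pass_file: str) -> str:
    # The context manager closes the handle even if reading fails.
    with open(pass_file, 'r') as f:
        return f.read().strip()

# Demonstration with a temporary file (a stand-in for the real password file).
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    tmp.write("s3cret\n")
print(read_password(tmp.name))  # → s3cret
os.remove(tmp.name)
```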
# ---- file: 1/0/10821/10821.py | repo: chr0m3/boj-codes | license: MIT ----
numbers = list(map(int, input().split(',')))
print(len(numbers))
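The two lines above solve BOJ 10821: read comma-separated numbers and print how many there are. The same logic as a function, for a quick check without stdin:

```python
def count_numbers(line: str) -> int:
    # Splitting on ',' yields one piece per number.
    return len(line.split(','))

print(count_numbers("5,4,3,2,1"))  # → 5
```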
# ---- file: pyEDAA/OutputFilter/__init__.py | repo: edaa-org/pyEDAA.OutputFilter | license: Apache-2.0 ----
from pyTooling.Decorators import export
__version__ = "0.1.0"
@export
class Filter:
pass
# ---- file: app/machine_learning/utils.py | repo: jonzxz/project-piscator | license: MIT ----
import re
import joblib
from sklearn.ensemble import RandomForestClassifier
from typing import List, Tuple
# Cleans up a messed up HTML / tabbed raw content into space delimited content
def clean_up_raw_body(raw_text: str) -> str:
return ' '.join([line.strip() for line in raw_text.strip().splitlines() \
if line.strip()])
# Flattens a list of tuples for (Sender, SenderDomain) into [Sender, SenderDomain]
# By right there SHOULD only be a single pair but kept in list just in case!
# Even indexes are Senders and odd indexes are SenderDomains
def flatten_from_tuples(list_tupl: List[Tuple]) -> List:
return [item for tup in list_tupl for item in tup]
# Retrieves a list of [Sender, SenderDomain] and returns domain names only
# eg. ['Person', 'Person@Company.com']
# Returns [Company.com]
# By right there should only be one entry but kept in list just in case
# set list to remove duplicates
def identify_domains(list_of_sender_domain_pairs: List):
if isinstance(list_of_sender_domain_pairs, list):
return list(set([item.split(sep='@')[1] for item \
in list_of_sender_domain_pairs if '@' in item]))
return list_of_sender_domain_pairs.split(sep='@')[-1]
def load_model(MODEL_NAME) -> RandomForestClassifier:
return joblib.load(MODEL_NAME)
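A quick standalone check of `identify_domains`, with its body inlined so the snippet runs on its own:

```python
def identify_domains(pairs):
    # Inlined copy of the helper above, for demonstration only.
    if isinstance(pairs, list):
        return list(set(item.split(sep='@')[1] for item in pairs if '@' in item))
    return pairs.split(sep='@')[-1]

print(identify_domains(['Person', 'Person@Company.com']))  # → ['Company.com']
print(identify_domains('Person@Company.com'))              # → Company.com
```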
# ---- file: superslomo/model.py | repo: myungsub/meta-interpolation | license: MIT ----
import torch
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from model_utils import *
class down(nn.Module):
"""
A class for creating neural network blocks containing layers:
Average Pooling --> Convlution + Leaky ReLU --> Convolution + Leaky ReLU
This is used in the UNet Class to create a UNet like NN architecture.
...
Methods
-------
forward(x)
Returns output tensor after passing input `x` to the neural network
block.
"""
def __init__(self, inChannels, outChannels, filterSize):
"""
Parameters
----------
inChannels : int
number of input channels for the first convolutional layer.
outChannels : int
number of output channels for the first convolutional layer.
This is also used as input and output channels for the
second convolutional layer.
filterSize : int
filter size for the convolution filter. input N would create
a N x N filter.
"""
super(down, self).__init__()
# Initialize convolutional layers.
# self.conv1 = nn.Conv2d(inChannels, outChannels, filterSize, stride=1, padding=int((filterSize - 1) / 2))
# self.conv2 = nn.Conv2d(outChannels, outChannels, filterSize, stride=1, padding=int((filterSize - 1) / 2))
self.conv1 = MetaConv2dLayer(in_channels=inChannels, out_channels=outChannels, kernel_size=filterSize, stride=1, padding=int((filterSize - 1) / 2))
self.conv2 = MetaConv2dLayer(in_channels=outChannels, out_channels=outChannels, kernel_size=filterSize, stride=1, padding=int((filterSize - 1) / 2))
def forward(self, x, params=None):
"""
Returns output tensor after passing input `x` to the neural network
block.
Parameters
----------
x : tensor
input to the NN block.
Returns
-------
tensor
output of the NN block.
"""
# Average pooling with kernel size 2 (2 x 2).
x = F.avg_pool2d(x, 2)
# (Convolution + Leaky ReLU) x 2
param_dict = dict()
if params is not None:
param_dict = extract_top_level_dict(current_dict=params)
x = F.leaky_relu(self.conv1(x, params=param_dict['conv1']), negative_slope = 0.1)
x = F.leaky_relu(self.conv2(x, params=param_dict['conv2']), negative_slope = 0.1)
else:
x = F.leaky_relu(self.conv1(x), negative_slope = 0.1)
x = F.leaky_relu(self.conv2(x), negative_slope = 0.1)
return x
class up(nn.Module):
"""
A class for creating neural network blocks containing layers:
Bilinear interpolation --> Convlution + Leaky ReLU --> Convolution + Leaky ReLU
This is used in the UNet Class to create a UNet like NN architecture.
...
Methods
-------
forward(x, skpCn)
Returns output tensor after passing input `x` to the neural network
block.
"""
def __init__(self, inChannels, outChannels):
"""
Parameters
----------
inChannels : int
number of input channels for the first convolutional layer.
outChannels : int
number of output channels for the first convolutional layer.
This is also used for setting input and output channels for
the second convolutional layer.
"""
super(up, self).__init__()
# Initialize convolutional layers.
# self.conv1 = nn.Conv2d(inChannels, outChannels, 3, stride=1, padding=1)
self.conv1 = MetaConv2dLayer(in_channels=inChannels, out_channels=outChannels, kernel_size=3, stride=1, padding=1)
# (2 * outChannels) is used for accommodating skip connection.
# self.conv2 = nn.Conv2d(2 * outChannels, outChannels, 3, stride=1, padding=1)
self.conv2 = MetaConv2dLayer(in_channels=2 * outChannels, out_channels=outChannels, kernel_size=3, stride=1, padding=1)
def forward(self, x, skpCn, params=None):
"""
Returns output tensor after passing input `x` to the neural network
block.
Parameters
----------
x : tensor
input to the NN block.
skpCn : tensor
skip connection input to the NN block.
Returns
-------
tensor
output of the NN block.
"""
# Bilinear interpolation with scaling 2.
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
param_dict = dict()
if params is not None:
param_dict = extract_top_level_dict(current_dict=params)
# Convolution + Leaky ReLU
x = F.leaky_relu(self.conv1(x, params=param_dict['conv1']), negative_slope = 0.1)
# Convolution + Leaky ReLU on (`x`, `skpCn`)
x = F.leaky_relu(self.conv2(torch.cat((x, skpCn), 1), params=param_dict['conv2']), negative_slope = 0.1)
else:
# Convolution + Leaky ReLU
x = F.leaky_relu(self.conv1(x), negative_slope = 0.1)
# Convolution + Leaky ReLU on (`x`, `skpCn`)
x = F.leaky_relu(self.conv2(torch.cat((x, skpCn), 1)), negative_slope = 0.1)
return x
class UNet(nn.Module):
"""
A class for creating UNet like architecture as specified by the
Super SloMo paper.
...
Methods
-------
forward(x)
Returns output tensor after passing input `x` to the neural network
block.
"""
def __init__(self, inChannels, outChannels):
"""
Parameters
----------
inChannels : int
number of input channels for the UNet.
outChannels : int
number of output channels for the UNet.
"""
super(UNet, self).__init__()
# Initialize neural network blocks.
self.conv1 = nn.Conv2d(inChannels, 32, 7, stride=1, padding=3)
self.conv2 = nn.Conv2d(32, 32, 7, stride=1, padding=3)
self.down1 = down(32, 64, 5)
self.down2 = down(64, 128, 3)
self.down3 = down(128, 256, 3)
self.down4 = down(256, 512, 3)
self.down5 = down(512, 512, 3)
self.up1 = up(512, 512)
self.up2 = up(512, 256)
self.up3 = up(256, 128)
self.up4 = up(128, 64)
self.up5 = up(64, 32)
self.conv3 = nn.Conv2d(32, outChannels, 3, stride=1, padding=1)
def forward(self, x):
"""
Returns output tensor after passing input `x` to the neural network.
Parameters
----------
x : tensor
input to the UNet.
Returns
-------
tensor
output of the UNet.
"""
x = F.leaky_relu(self.conv1(x), negative_slope = 0.1)
s1 = F.leaky_relu(self.conv2(x), negative_slope = 0.1)
s2 = self.down1(s1)
s3 = self.down2(s2)
s4 = self.down3(s3)
s5 = self.down4(s4)
x = self.down5(s5)
x = self.up1(x, s5)
x = self.up2(x, s4)
x = self.up3(x, s3)
x = self.up4(x, s2)
x = self.up5(x, s1)
x = F.leaky_relu(self.conv3(x), negative_slope = 0.1)
return x
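The UNet above halves the spatial resolution five times on the way down (one `avg_pool2d` per `down` block) and doubles it back up in the decoder. A quick arithmetic check of the encoder feature-map sizes for a 256×256 input:

```python
def encoder_sizes(h, w, levels=5):
    # Each `down` block starts with 2x2 average pooling, halving both dimensions.
    sizes = [(h, w)]
    for _ in range(levels):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

print(encoder_sizes(256, 256))  # → [(256, 256), (128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
```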
class backWarp(nn.Module):
"""
A class for creating a backwarping object.
This is used for backwarping to an image:
Given optical flow from frame I0 to I1 --> F_0_1 and frame I1,
it generates I0 <-- backwarp(F_0_1, I1).
...
Methods
-------
forward(x)
Returns output tensor after passing input `img` and `flow` to the backwarping
block.
"""
def __init__(self, W, H, device):
"""
Parameters
----------
W : int
width of the image.
H : int
height of the image.
device : device
computation device (cpu/cuda).
"""
super(backWarp, self).__init__()
# create a grid
gridX, gridY = np.meshgrid(np.arange(W), np.arange(H))
self.W = W
self.H = H
self.gridX = torch.tensor(gridX, requires_grad=False, device=device)
self.gridY = torch.tensor(gridY, requires_grad=False, device=device)
def forward(self, img, flow):
"""
Returns output tensor after passing input `img` and `flow` to the backwarping
block.
I0 = backwarp(I1, F_0_1)
Parameters
----------
img : tensor
frame I1.
flow : tensor
optical flow from I0 and I1: F_0_1.
Returns
-------
tensor
frame I0.
"""
# Extract horizontal and vertical flows.
u = flow[:, 0, :, :]
v = flow[:, 1, :, :]
x = self.gridX.unsqueeze(0).expand_as(u).float() + u
y = self.gridY.unsqueeze(0).expand_as(v).float() + v
# range -1 to 1
x = 2*(x/self.W - 0.5)
y = 2*(y/self.H - 0.5)
# stacking X and Y
grid = torch.stack((x,y), dim=3)
# Sample pixels using bilinear interpolation.
imgOut = torch.nn.functional.grid_sample(img, grid)
return imgOut
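`grid_sample` expects sampling coordinates normalized to [-1, 1], which is why `backWarp.forward` rescales absolute pixel positions with `2*(x/W - 0.5)`. The mapping can be checked standalone:

```python
def normalize(x, size):
    # Map an absolute coordinate in [0, size] onto grid_sample's [-1, 1] range.
    return 2 * (x / size - 0.5)

W = 8
print([normalize(x, W) for x in (0, 4, 8)])  # → [-1.0, 0.0, 1.0]
```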
# Creating an array of `t` values for the 7 intermediate frames between
# reference frames I0 and I1.
t = np.linspace(0.125, 0.875, 7)
def getFlowCoeff (indices, device):
"""
Gets flow coefficients used for calculating intermediate optical
flows from optical flows between I0 and I1: F_0_1 and F_1_0.
F_t_0 = C00 x F_0_1 + C01 x F_1_0
F_t_1 = C10 x F_0_1 + C11 x F_1_0
where,
C00 = -(1 - t) x t
C01 = t x t
C10 = (1 - t) x (1 - t)
C11 = -t x (1 - t)
Parameters
----------
indices : tensor
indices corresponding to the intermediate frame positions
of all samples in the batch.
device : device
computation device (cpu/cuda).
Returns
-------
tensor
coefficients C00, C01, C10, C11.
"""
# Convert indices tensor to numpy array
ind = indices.detach().numpy()
C11 = C00 = - (1 - (t[ind])) * (t[ind])
C01 = (t[ind]) * (t[ind])
C10 = (1 - (t[ind])) * (1 - (t[ind]))
return torch.Tensor(C00)[None, None, None, :].permute(3, 0, 1, 2).to(device), torch.Tensor(C01)[None, None, None, :].permute(3, 0, 1, 2).to(device), torch.Tensor(C10)[None, None, None, :].permute(3, 0, 1, 2).to(device), torch.Tensor(C11)[None, None, None, :].permute(3, 0, 1, 2).to(device)
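With `t = np.linspace(0.125, 0.875, 7)`, the middle index (`ind = 3`) corresponds to t = 0.5, where the four coefficients come out symmetric. The formulas from the docstring, evaluated in plain Python:

```python
# The same 7 intermediate time steps as np.linspace(0.125, 0.875, 7).
ts = [0.125 + 0.125 * i for i in range(7)]
t = ts[3]  # middle frame, t = 0.5

C00 = -(1 - t) * t        # -0.25
C01 = t * t               #  0.25
C10 = (1 - t) * (1 - t)   #  0.25
C11 = -t * (1 - t)        # -0.25
print(C00, C01, C10, C11)  # → -0.25 0.25 0.25 -0.25
```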
def getWarpCoeff (indices, device):
"""
Gets coefficients used for calculating final intermediate
frame `It_gen` from backwarped images using flows F_t_0 and F_t_1.
It_gen = (C0 x V_t_0 x g_I_0_F_t_0 + C1 x V_t_1 x g_I_1_F_t_1) / (C0 x V_t_0 + C1 x V_t_1)
where,
C0 = 1 - t
C1 = t
V_t_0, V_t_1 --> visibility maps
g_I_0_F_t_0, g_I_1_F_t_1 --> backwarped intermediate frames
Parameters
----------
indices : tensor
indices corresponding to the intermediate frame positions
of all samples in the batch.
device : device
computation device (cpu/cuda).
Returns
-------
tensor
coefficients C0 and C1.
"""
# Convert indices tensor to numpy array
ind = indices.detach().numpy()
C0 = 1 - t[ind]
C1 = t[ind]
return torch.Tensor(C0)[None, None, None, :].permute(3, 0, 1, 2).to(device), torch.Tensor(C1)[None, None, None, :].permute(3, 0, 1, 2).to(device)
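The warp coefficients are simply the linear blending weights C0 = 1 - t and C1 = t, so they always sum to 1 and the temporally closer input frame gets the larger weight:

```python
ts = [0.125 + 0.125 * i for i in range(7)]
for t in ts:
    C0, C1 = 1 - t, t
    assert abs(C0 + C1 - 1.0) < 1e-12  # the weights always sum to 1

print(1 - ts[0], ts[0])  # → 0.875 0.125 (the frame closest to I0 weights I0 most)
```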
class SuperSloMoModel(nn.Module):
def __init__(self, device):
super(SuperSloMoModel, self).__init__()
self.device = device
self.flowComp = UNet(6, 4)
self.arbTimeFlowIntrp = UNet(20, 5)
self.backwarp = None
def forward(self, I0, I1, ind):
w, h = I0.size(3), I0.size(2)
s = 6 # bits to shift
padW, padH = 0, 0
if w != ((w >> s) << s):
padW = (((w >> s) + 1) << s) - w
if h != ((h >> s) << s):
padH = (((h >> s) + 1) << s) - h
paddingInput = nn.ReflectionPad2d(padding=[padW // 2, padW - padW // 2, padH // 2, padH - padH // 2])
paddingOutput = nn.ReflectionPad2d(padding=[0 - padW // 2, padW // 2 - padW, 0 - padH // 2, padH // 2 - padH])
I0 = paddingInput(I0)
I1 = paddingInput(I1)
flowOut = self.flowComp(torch.cat((I0, I1), dim=1))
F_0_1 = flowOut[:, :2, :, :]
F_1_0 = flowOut[:, 2:, :, :]
fCoeff = getFlowCoeff(ind, self.device)
F_t_0 = fCoeff[0] * F_0_1 + fCoeff[1] * F_1_0
F_t_1 = fCoeff[2] * F_0_1 + fCoeff[3] * F_1_0
if self.backwarp is None or self.backwarp.W != I0.size(3) or self.backwarp.H != I0.size(2):
self.backwarp = backWarp(I0.size(3), I0.size(2), self.device) # make grid
g_I0_F_t_0 = self.backwarp(I0, F_t_0)
g_I1_F_t_1 = self.backwarp(I1, F_t_1)
intrpOut = self.arbTimeFlowIntrp(torch.cat((I0, I1, F_0_1, F_1_0, F_t_1, F_t_0, g_I1_F_t_1, g_I0_F_t_0), dim=1))
F_t_0_f = intrpOut[:, :2, :, :] + F_t_0
F_t_1_f = intrpOut[:, 2:4, :, :] + F_t_1
        V_t_0 = torch.sigmoid(intrpOut[:, 4:5, :, :])  # torch.sigmoid: F.sigmoid is deprecated
V_t_1 = 1 - V_t_0
g_I0_F_t_0_f = self.backwarp(I0, F_t_0_f)
g_I1_F_t_1_f = self.backwarp(I1, F_t_1_f)
wCoeff = getWarpCoeff(ind, self.device)
Ft_p = (wCoeff[0] * V_t_0 * g_I0_F_t_0_f + wCoeff[1] * V_t_1 * g_I1_F_t_1_f) / (wCoeff[0] * V_t_0 + wCoeff[1] * V_t_1)
warped_I0, warped_I1 = self.backwarp(I0, F_1_0), self.backwarp(I1, F_0_1)
Ft_p = paddingOutput(Ft_p)
F_0_1, F_1_0 = paddingOutput(F_0_1), paddingOutput(F_1_0)
g_I0_F_t_0, g_I1_F_t_1 = paddingOutput(g_I0_F_t_0), paddingOutput(g_I1_F_t_1)
warped_I0, warped_I1 = paddingOutput(warped_I0), paddingOutput(warped_I1)
        # Returns: interpolated frame, bidirectional flow maps,
        # warped intermediate frames, and the warped input frames (0->1, 1->0).
        return Ft_p, \
            (F_0_1, F_1_0), \
            (g_I0_F_t_0, g_I1_F_t_1), \
            (warped_I0, warped_I1)
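Both forward passes in this file pad H and W up to the next multiple of 64 (2**6, one factor of two per down/up level of the UNet) using bit shifts before applying ReflectionPad2d. A standalone sketch of just that arithmetic:

```python
def pad_amount(size, s=6):
    # Extra pixels needed to round `size` up to a multiple of 2**s (64 for s=6).
    if size != ((size >> s) << s):
        return (((size >> s) + 1) << s) - size
    return 0

for w in (640, 650, 1280, 1281):
    # Each padded size is an exact multiple of 64.
    print(w, pad_amount(w), (w + pad_amount(w)) % 64)
```

The forward code then splits the pad into left/right (and top/bottom) halves, and undoes it afterwards with a ReflectionPad2d built from the negated amounts, which crops back to the original size.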
class MetaUNet(nn.Module):
    """
    A UNet-like architecture as specified by the Super SloMo paper,
    built from meta-layers so the forward pass can optionally run with
    externally supplied parameters (e.g. in meta-learning inner loops).
    ...
    Methods
    -------
    forward(x, params=None)
        Returns output tensor after passing input `x` to the neural network
        block, optionally using the externally supplied `params`.
    """
def __init__(self, inChannels, outChannels):
"""
Parameters
----------
inChannels : int
number of input channels for the UNet.
outChannels : int
number of output channels for the UNet.
"""
super(MetaUNet, self).__init__()
# Initialize neural network blocks.
self.conv1 = MetaConv2dLayer(in_channels=inChannels, out_channels=32, kernel_size=7, stride=1, padding=3)
self.conv2 = MetaConv2dLayer(in_channels=32, out_channels=32, kernel_size=7, stride=1, padding=3)
self.down1 = down(32, 64, 5)
self.down2 = down(64, 128, 3)
self.down3 = down(128, 256, 3)
self.down4 = down(256, 512, 3)
self.down5 = down(512, 512, 3)
self.up1 = up(512, 512)
self.up2 = up(512, 256)
self.up3 = up(256, 128)
self.up4 = up(128, 64)
self.up5 = up(64, 32)
self.conv3 = MetaConv2dLayer(in_channels=32, out_channels=outChannels, kernel_size=3, stride=1, padding=1)
    def forward(self, x, params=None):
        """
        Returns output tensor after passing input `x` to the neural network.
        Parameters
        ----------
        x : tensor
            input to the UNet.
        params : dict, optional
            externally supplied parameters for the meta-layers; if None,
            the module's own parameters are used.
        Returns
        -------
        tensor
            output of the UNet.
        """
param_dict = dict()
if params is not None:
param_dict = extract_top_level_dict(current_dict=params)
            x = F.leaky_relu(self.conv1(x, params=param_dict['conv1']), negative_slope=0.1)
            s1 = F.leaky_relu(self.conv2(x, params=param_dict['conv2']), negative_slope=0.1)
s2 = self.down1(s1, params=param_dict['down1'])
s3 = self.down2(s2, params=param_dict['down2'])
s4 = self.down3(s3, params=param_dict['down3'])
s5 = self.down4(s4, params=param_dict['down4'])
x = self.down5(s5, params=param_dict['down5'])
x = self.up1(x, s5, params=param_dict['up1'])
x = self.up2(x, s4, params=param_dict['up2'])
x = self.up3(x, s3, params=param_dict['up3'])
x = self.up4(x, s2, params=param_dict['up4'])
x = self.up5(x, s1, params=param_dict['up5'])
            x = F.leaky_relu(self.conv3(x, params=param_dict['conv3']), negative_slope=0.1)
else:
            x = F.leaky_relu(self.conv1(x), negative_slope=0.1)
            s1 = F.leaky_relu(self.conv2(x), negative_slope=0.1)
s2 = self.down1(s1)
s3 = self.down2(s2)
s4 = self.down3(s3)
s5 = self.down4(s4)
x = self.down5(s5)
x = self.up1(x, s5)
x = self.up2(x, s4)
x = self.up3(x, s3)
x = self.up4(x, s2)
x = self.up5(x, s1)
            x = F.leaky_relu(self.conv3(x), negative_slope=0.1)
return x
class MetaSuperSloMo(nn.Module):
def __init__(self, device, resume=False):
super(MetaSuperSloMo, self).__init__()
self.device = device
self.flowComp = MetaUNet(6, 4)
self.arbTimeFlowIntrp = MetaUNet(20, 5)
self.backwarp = None
if resume:
            print('Loading model: pretrained_models/superslomo_base.pth')
            # map_location keeps the load working on CPU-only machines
            checkpoint = torch.load('pretrained_models/superslomo_base.pth', map_location=device)
self.flowComp.load_state_dict(checkpoint['state_dictFC'])
self.arbTimeFlowIntrp.load_state_dict(checkpoint['state_dictAT'])
def forward(self, I0, I1, ind=3, params=None, **kwargs):
        ind = ind * torch.ones(I0.size(0), dtype=torch.long)  # one index per batch sample
w, h = I0.size(3), I0.size(2)
s = 6 # bits to shift
padW, padH = 0, 0
if w != ((w >> s) << s):
padW = (((w >> s) + 1) << s) - w
if h != ((h >> s) << s):
padH = (((h >> s) + 1) << s) - h
paddingInput = nn.ReflectionPad2d(padding=[padW // 2, padW - padW // 2, padH // 2, padH - padH // 2])
paddingOutput = nn.ReflectionPad2d(padding=[0 - padW // 2, padW // 2 - padW, 0 - padH // 2, padH // 2 - padH])
I0 = paddingInput(I0)
I1 = paddingInput(I1)
param_dict = dict()
if params is not None:
param_dict = extract_top_level_dict(current_dict=params)
flowOut = self.flowComp(torch.cat((I0, I1), dim=1), params=param_dict['flowComp'])
F_0_1 = flowOut[:, :2, :, :]
F_1_0 = flowOut[:, 2:, :, :]
fCoeff = getFlowCoeff(ind, self.device)
F_t_0 = fCoeff[0] * F_0_1 + fCoeff[1] * F_1_0
F_t_1 = fCoeff[2] * F_0_1 + fCoeff[3] * F_1_0
if self.backwarp is None or self.backwarp.W != I0.size(3) or self.backwarp.H != I0.size(2):
self.backwarp = backWarp(I0.size(3), I0.size(2), self.device) # make grid
g_I0_F_t_0 = self.backwarp(I0, F_t_0)
g_I1_F_t_1 = self.backwarp(I1, F_t_1)
intrpOut = self.arbTimeFlowIntrp(torch.cat((I0, I1, F_0_1, F_1_0, F_t_1, F_t_0, g_I1_F_t_1, g_I0_F_t_0), dim=1),
params=param_dict['arbTimeFlowIntrp'])
else:
flowOut = self.flowComp(torch.cat((I0, I1), dim=1))
F_0_1 = flowOut[:, :2, :, :]
F_1_0 = flowOut[:, 2:, :, :]
fCoeff = getFlowCoeff(ind, self.device)
F_t_0 = fCoeff[0] * F_0_1 + fCoeff[1] * F_1_0
F_t_1 = fCoeff[2] * F_0_1 + fCoeff[3] * F_1_0
if self.backwarp is None or self.backwarp.W != I0.size(3) or self.backwarp.H != I0.size(2):
self.backwarp = backWarp(I0.size(3), I0.size(2), self.device) # make grid
g_I0_F_t_0 = self.backwarp(I0, F_t_0)
g_I1_F_t_1 = self.backwarp(I1, F_t_1)
intrpOut = self.arbTimeFlowIntrp(torch.cat((I0, I1, F_0_1, F_1_0, F_t_1, F_t_0, g_I1_F_t_1, g_I0_F_t_0), dim=1))
F_t_0_f = intrpOut[:, :2, :, :] + F_t_0
F_t_1_f = intrpOut[:, 2:4, :, :] + F_t_1
        V_t_0 = torch.sigmoid(intrpOut[:, 4:5, :, :])  # torch.sigmoid: F.sigmoid is deprecated
V_t_1 = 1 - V_t_0
g_I0_F_t_0_f = self.backwarp(I0, F_t_0_f)
g_I1_F_t_1_f = self.backwarp(I1, F_t_1_f)
wCoeff = getWarpCoeff(ind, self.device)
Ft_p = (wCoeff[0] * V_t_0 * g_I0_F_t_0_f + wCoeff[1] * V_t_1 * g_I1_F_t_1_f) / (wCoeff[0] * V_t_0 + wCoeff[1] * V_t_1)
warped_I0, warped_I1 = self.backwarp(I0, F_1_0), self.backwarp(I1, F_0_1)
Ft_p = paddingOutput(Ft_p)
F_0_1, F_1_0 = paddingOutput(F_0_1), paddingOutput(F_1_0)
g_I0_F_t_0, g_I1_F_t_1 = paddingOutput(g_I0_F_t_0), paddingOutput(g_I1_F_t_1)
warped_I0, warped_I1 = paddingOutput(warped_I0), paddingOutput(warped_I1)
        # Returns the interpolated frame plus intermediate results:
        # bidirectional flow maps, warped intermediate frames, and the
        # warped input frames (0->1, 1->0).
        return Ft_p, {
            'bidirectional_flow': (F_0_1, F_1_0),
            'warped_intermediate_frames': (g_I0_F_t_0, g_I1_F_t_1),
            'warped_input_frames': (warped_I0, warped_I1)}
    def zero_grad(self, params=None):
        if params is None:
            for param in self.parameters():
                if param.requires_grad and param.grad is not None:
                    if torch.sum(param.grad) > 0:
                        print(param.grad)  # debug: report non-zero grads before clearing
                        param.grad.zero_()
        else:
            for name, param in params.items():
                if param.requires_grad and param.grad is not None:
                    if torch.sum(param.grad) > 0:
                        print(param.grad)  # debug: report non-zero grads before clearing
                        param.grad.zero_()
                        params[name].grad = None
    def restore_backup_stats(self):
        """
        Reset batch statistics from the stored backup.
        """
        pass  # no batch statistics used in this model
# --- config.py (zombodotcom/twitchUserData; MIT / Unlicense) ---
Client_ID = "<Your Client ID>"
Authorization = "Bearer <Insert Bearer token Here>"
# --- tests/unit/proxy/roundtrip/test_janus_graph_proxy.py (joshthoward/amundsenmetadatalibrary; Apache-2.0) ---
# Copyright Contributors to the Amundsen project.
# SPDX-License-Identifier: Apache-2.0
from typing import Any, Mapping
import unittest
from .abstract_gremlin_proxy_tests import abstract_gremlin_proxy_test_class
from .roundtrip_janusgraph_proxy import RoundtripJanusGraphProxy
class JanusGraphGremlinProxyTest(
abstract_gremlin_proxy_test_class(), unittest.TestCase): # type: ignore
def _create_gremlin_proxy(self, config: Mapping[str, Any]) -> RoundtripJanusGraphProxy:
# Don't use PROXY_HOST, PROXY_PORT, PROXY_PASSWORD. They might not be JanusGraph
return RoundtripJanusGraphProxy(host=config['JANUS_GRAPH_URL'])
# --- tgtypes/models/venue.py (autogram/tgtypes; MIT) ---
from __future__ import annotations
from typing import TYPE_CHECKING, Optional
from ._base import TelegramObject
if TYPE_CHECKING: # pragma: no cover
from .location import Location
class Venue(TelegramObject):
"""
This object represents a venue.
Source: https://core.telegram.org/bots/api#venue
"""
location: Location
"""Venue location"""
title: str
"""Name of the venue"""
address: str
"""Address of the venue"""
foursquare_id: Optional[str] = None
"""Foursquare identifier of the venue"""
foursquare_type: Optional[str] = None
"""Foursquare type of the venue. (For example, 'arts_entertainment/default',
'arts_entertainment/aquarium' or 'food/icecream'.)"""
# --- tests/test_push.py (associatedpress/datakit-dworld; ISC) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# from unittest import mock
# from datakit_dworld.push import Push
def test_push(capsys):
"""Sample pytest test function with a built-in pytest fixture as an argument.
"""
# cmd = Greeting(None, None, cmd_name='dworld push')
# parsed_args = mock.Mock()
# parsed_args.greeting = 'Hello world!'
# cmd.run(parsed_args)
# out, err = capsys.readouterr()
# assert 'Hello world!' in out
# --- src/tests/samples.py (BeholdersEye/PyBitmessage; MIT / BSD-2-Clause-FreeBSD) ---
"""Various sample data"""
from binascii import unhexlify
magic = 0xE9BEB4D9
# These keys are from addresses test script
sample_pubsigningkey = unhexlify(
'044a367f049ec16cb6b6118eb734a9962d10b8db59c890cd08f210c43ff08bdf09d'
'16f502ca26cd0713f38988a1237f1fc8fa07b15653c996dc4013af6d15505ce')
sample_pubencryptionkey = unhexlify(
'044597d59177fc1d89555d38915f581b5ff2286b39d022ca0283d2bdd5c36be5d3c'
'e7b9b97792327851a562752e4b79475d1f51f5a71352482b241227f45ed36a9')
sample_privsigningkey = \
b'93d0b61371a54b53df143b954035d612f8efa8a3ed1cf842c2186bfd8f876665'
sample_privencryptionkey = \
b'4b0b73a54e19b059dc274ab69df095fe699f43b17397bca26fdf40f4d7400a3a'
sample_ripe = b'003cd097eb7f35c87b5dc8b4538c22cb55312a9f'
# stream: 1, version: 2
sample_address = 'BM-onkVu1KKL2UaUss5Upg9vXmqd3esTmV79'
sample_factor = 66858749573256452658262553961707680376751171096153613379801854825275240965733
# G * sample_factor
sample_point = (
33567437183004486938355437500683826356288335339807546987348409590129959362313,
94730058721143827257669456336351159718085716196507891067256111928318063085006
)
sample_seed = 'TIGER, tiger, burning bright. In the forests of the night'
# Deterministic addresses with stream 1 and versions 3, 4
sample_deterministic_ripe = b'00cfb69416ae76f68a81c459de4e13460c7d17eb'
sample_deterministic_addr3 = 'BM-2DBPTgeSawWYZceFD69AbDT5q4iUWtj1ZN'
sample_deterministic_addr4 = 'BM-2cWzSnwjJ7yRP3nLEWUV5LisTZyREWSzUK'
sample_daddr3_512 = 18875720106589866286514488037355423395410802084648916523381
sample_daddr4_512 = 25152821841976547050350277460563089811513157529113201589004
sample_statusbar_msg = "new status bar message"
sample_inbox_msg_ids = ['27e644765a3e4b2e973ee7ccf958ea20', '51fc5531-3989-4d69-bbb5-68d64b756f5b',
'2c975c515f8b414db5eea60ba57ba455', 'bc1f2d8a-681c-4cc0-9a12-6067c7e1ac24']
# second address in sample_test_subscription_address is for the announcement broadcast
sample_test_subscription_address = ['BM-2cWQLCBGorT9pUGkYSuGGVr9LzE4mRnQaq', 'BM-GtovgYdgs7qXPkoYaRgrLFuFKz1SFpsw']
sample_subscription_name = 'test sub'
# --- capa/features/extractors/ida/extractor.py (pombredanne/capa; Apache-2.0) ---
# Copyright (C) 2020 FireEye, Inc. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at: [package root]/LICENSE.txt
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
import idaapi
import capa.ida.helpers
import capa.features.extractors.elf
import capa.features.extractors.ida.file
import capa.features.extractors.ida.insn
import capa.features.extractors.ida.global_
import capa.features.extractors.ida.function
import capa.features.extractors.ida.basicblock
from capa.features.extractors.base_extractor import FeatureExtractor
class FunctionHandle:
"""this acts like an idaapi.func_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class BasicBlockHandle:
"""this acts like an idaapi.BasicBlock but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.start_ea
def __getattr__(self, name):
return getattr(self._inner, name)
class InstructionHandle:
"""this acts like an idaapi.insn_t but with __int__()"""
def __init__(self, inner):
self._inner = inner
def __int__(self):
return self.ea
def __getattr__(self, name):
return getattr(self._inner, name)
class IdaFeatureExtractor(FeatureExtractor):
def __init__(self):
super(IdaFeatureExtractor, self).__init__()
self.global_features = []
self.global_features.extend(capa.features.extractors.ida.global_.extract_os())
self.global_features.extend(capa.features.extractors.ida.global_.extract_arch())
def get_base_address(self):
return idaapi.get_imagebase()
def extract_global_features(self):
yield from self.global_features
def extract_file_features(self):
yield from capa.features.extractors.ida.file.extract_features()
def get_functions(self):
import capa.features.extractors.ida.helpers as ida_helpers
# data structure shared across functions yielded here.
# useful for caching analysis relevant across a single workspace.
ctx = {}
# ignore library functions and thunk functions as identified by IDA
for f in ida_helpers.get_functions(skip_thunks=True, skip_libs=True):
setattr(f, "ctx", ctx)
yield FunctionHandle(f)
@staticmethod
def get_function(ea):
f = idaapi.get_func(ea)
setattr(f, "ctx", {})
return FunctionHandle(f)
def extract_function_features(self, f):
yield from capa.features.extractors.ida.function.extract_features(f)
def get_basic_blocks(self, f):
import capa.features.extractors.ida.helpers as ida_helpers
for bb in ida_helpers.get_function_blocks(f):
yield BasicBlockHandle(bb)
def extract_basic_block_features(self, f, bb):
yield from capa.features.extractors.ida.basicblock.extract_features(f, bb)
def get_instructions(self, f, bb):
import capa.features.extractors.ida.helpers as ida_helpers
for insn in ida_helpers.get_instructions_in_range(bb.start_ea, bb.end_ea):
yield InstructionHandle(insn)
def extract_insn_features(self, f, bb, insn):
yield from capa.features.extractors.ida.insn.extract_features(f, bb, insn)
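The three Handle classes above share one pattern: wrap an inner object (an idaapi func_t, BasicBlock, or insn_t), delegate attribute access to it, and add an `__int__` conversion so the handle can be used wherever an address is expected. A generic standalone sketch of the pattern (the names `AddressableHandle` and `FakeFunc` are illustrative, not capa's or IDA's API):

```python
class AddressableHandle:
    """Wraps any object, delegating attribute lookups and exposing int(handle)."""
    def __init__(self, inner, address_attr):
        self._inner = inner
        self._address_attr = address_attr

    def __int__(self):
        return getattr(self._inner, self._address_attr)

    def __getattr__(self, name):
        # Only invoked when normal attribute lookup fails, so _inner resolves
        # via the instance dict without recursing back into __getattr__.
        return getattr(self._inner, name)


class FakeFunc:
    # Stand-in for an idaapi.func_t with a start address and a name.
    start_ea = 0x401000
    name = "sub_401000"

h = AddressableHandle(FakeFunc(), "start_ea")
print(hex(int(h)), h.name)  # 0x401000 sub_401000
```

Because `__getattr__` is a fallback hook, every attribute the inner object already exposes keeps working transparently through the handle.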
# --- modules/python3/tests/unittests/scripts/glm.py (ImagiaViz/inviwo; BSD-2-Clause) ---
import inviwopy
from inviwopy.glm import *
v1 = vec3(1,2,3)
v2 = size2_t(4,5)
m1 = mat4(1)
m2 = mat3(0,1,0,-1,0,0,0,0,2)
v3 = m2 * v1
v4 = vec4(1,2,3,4)
w = v4.w
a = v4.a
q = v4.q
z = v4.z
b = v4.b
p = v4.p
y = v4.y
g = v4.g
t = v4.t
x = v4.x
r = v4.r
s = v4.s
# --- demisto_client/demisto_api/models/investigation_playbook_task.py (guytest/demisto-py; Apache-2.0) ---
# coding: utf-8
"""
Demisto API
This is the public REST API to integrate with the demisto server. HTTP request can be sent using any HTTP-client. For an example dedicated client take a look at: https://github.com/demisto/demisto-py. Requests must include API-key that can be generated in the Demisto web client under 'Settings' -> 'Integrations' -> 'API keys' Optimistic Locking and Versioning\\: When using Demisto REST API, you will need to make sure to work on the latest version of the item (incident, entry, etc.), otherwise, you will get a DB version error (which not allow you to override a newer item). In addition, you can pass 'version\\: -1' to force data override (make sure that other users data might be lost). Assume that Alice and Bob both read the same data from Demisto server, then they both changed the data, and then both tried to write the new versions back to the server. Whose changes should be saved? Alice’s? Bob’s? To solve this, each data item in Demisto has a numeric incremental version. If Alice saved an item with version 4 and Bob trying to save the same item with version 3, Demisto will rollback Bob request and returns a DB version conflict error. Bob will need to get the latest item and work on it so Alice work will not get lost. Example request using 'curl'\\: ``` curl 'https://hostname:443/incidents/search' -H 'content-type: application/json' -H 'accept: application/json' -H 'Authorization: <API Key goes here>' --data-binary '{\"filter\":{\"query\":\"-status:closed -category:job\",\"period\":{\"by\":\"day\",\"fromValue\":7}}}' --compressed ``` # noqa: E501
OpenAPI spec version: 2.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from demisto_client.demisto_api.models.advance_arg import AdvanceArg # noqa: F401,E501
from demisto_client.demisto_api.models.data_collection_form import DataCollectionForm # noqa: F401,E501
from demisto_client.demisto_api.models.evidence_data import EvidenceData # noqa: F401,E501
from demisto_client.demisto_api.models.field_mapping import FieldMapping # noqa: F401,E501
from demisto_client.demisto_api.models.inv_playbook_task_complete_data import InvPlaybookTaskCompleteData # noqa: F401,E501
# from demisto_client.demisto_api.models.investigation_playbook import InvestigationPlaybook # noqa: F401,E501
from demisto_client.demisto_api.models.notifiable_item import NotifiableItem # noqa: F401,E501
from demisto_client.demisto_api.models.reputation_calc_alg import ReputationCalcAlg # noqa: F401,E501
from demisto_client.demisto_api.models.sla import SLA # noqa: F401,E501
from demisto_client.demisto_api.models.task import Task # noqa: F401,E501
from demisto_client.demisto_api.models.task_condition import TaskCondition # noqa: F401,E501
from demisto_client.demisto_api.models.task_loop import TaskLoop # noqa: F401,E501
from demisto_client.demisto_api.models.task_state import TaskState # noqa: F401,E501
from demisto_client.demisto_api.models.task_type import TaskType # noqa: F401,E501
from demisto_client.demisto_api.models.task_view import TaskView # noqa: F401,E501
from demisto_client.demisto_api.models.timer_trigger import TimerTrigger # noqa: F401,E501
class InvestigationPlaybookTask(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'arguments': 'dict(str, object)',
'assignee': 'str',
'assignee_set': 'bool',
'blocking_tasks': 'list[str]',
'comments': 'bool',
'completed_by': 'str',
'completed_count': 'int',
'completed_date': 'datetime',
'conditions': 'list[TaskCondition]',
'continue_on_error': 'bool',
'default_assignee': 'str',
'default_assignee_complex': 'AdvanceArg',
'default_reminder': 'int',
'due_date': 'datetime',
'due_date_set': 'bool',
'entries': 'list[str]',
'evidence_data': 'EvidenceData',
'execution_count': 'int',
'field_mapping': 'list[FieldMapping]',
'for_each_index': 'int',
'for_each_inputs': 'dict(str, list[object])',
'form': 'DataCollectionForm',
'id': 'str',
'ignore_worker': 'bool',
'indent': 'int',
'input': 'str',
'loop': 'TaskLoop',
'message': 'NotifiableItem',
'next_tasks': 'dict(str, list[str])',
'note': 'bool',
'outputs': 'dict(str, object)',
'parent_block_count': 'int',
'parent_playbook_id': 'str',
'patched': 'bool',
'playbook_inputs': 'dict(str, object)',
'previous_tasks': 'dict(str, list[str])',
'reminder': 'int',
'reputation_calc': 'ReputationCalcAlg',
'restricted_completion': 'bool',
'script_arguments': 'dict(str, AdvanceArg)',
'separate_context': 'bool',
'sla': 'SLA',
'sla_reminder': 'SLA',
'start_date': 'datetime',
'state': 'TaskState',
'sub_playbook': 'InvestigationPlaybook',
'task': 'Task',
'task_complete_data': 'list[InvPlaybookTaskCompleteData]',
'task_id': 'str',
'timer_triggers': 'list[TimerTrigger]',
'type': 'TaskType',
'view': 'TaskView',
'will_not_execute_count': 'int'
}
attribute_map = {
'arguments': 'arguments',
'assignee': 'assignee',
'assignee_set': 'assigneeSet',
'blocking_tasks': 'blockingTasks',
'comments': 'comments',
'completed_by': 'completedBy',
'completed_count': 'completedCount',
'completed_date': 'completedDate',
'conditions': 'conditions',
'continue_on_error': 'continueOnError',
'default_assignee': 'defaultAssignee',
'default_assignee_complex': 'defaultAssigneeComplex',
'default_reminder': 'defaultReminder',
'due_date': 'dueDate',
'due_date_set': 'dueDateSet',
'entries': 'entries',
'evidence_data': 'evidenceData',
'execution_count': 'executionCount',
'field_mapping': 'fieldMapping',
'for_each_index': 'forEachIndex',
'for_each_inputs': 'forEachInputs',
'form': 'form',
'id': 'id',
'ignore_worker': 'ignoreWorker',
'indent': 'indent',
'input': 'input',
'loop': 'loop',
'message': 'message',
'next_tasks': 'nextTasks',
'note': 'note',
'outputs': 'outputs',
'parent_block_count': 'parentBlockCount',
'parent_playbook_id': 'parentPlaybookID',
'patched': 'patched',
'playbook_inputs': 'playbookInputs',
'previous_tasks': 'previousTasks',
'reminder': 'reminder',
'reputation_calc': 'reputationCalc',
'restricted_completion': 'restrictedCompletion',
'script_arguments': 'scriptArguments',
'separate_context': 'separateContext',
'sla': 'sla',
'sla_reminder': 'slaReminder',
'start_date': 'startDate',
'state': 'state',
'sub_playbook': 'subPlaybook',
'task': 'task',
'task_complete_data': 'taskCompleteData',
'task_id': 'taskId',
'timer_triggers': 'timerTriggers',
'type': 'type',
'view': 'view',
'will_not_execute_count': 'willNotExecuteCount'
}
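Swagger-generated models like this one use `swagger_types` for type information and `attribute_map` to translate between snake_case Python attribute names and the camelCase keys of the JSON wire format. A minimal standalone sketch of how `attribute_map` drives serialization (using a trimmed copy of two entries from the map above):

```python
# Two entries copied from the class's attribute_map.
attribute_map = {"assignee_set": "assigneeSet", "task_id": "taskId"}

def to_json_keys(obj_dict):
    # Rename snake_case attribute names to their JSON (camelCase) keys;
    # unknown keys pass through unchanged.
    return {attribute_map.get(k, k): v for k, v in obj_dict.items()}

print(to_json_keys({"assignee_set": True, "task_id": "7"}))
# {'assigneeSet': True, 'taskId': '7'}
```

Deserialization applies the inverse mapping, and `swagger_types` tells the generated client which model class or primitive to construct for each value.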
def __init__(self, arguments=None, assignee=None, assignee_set=None, blocking_tasks=None, comments=None, completed_by=None, completed_count=None, completed_date=None, conditions=None, continue_on_error=None, default_assignee=None, default_assignee_complex=None, default_reminder=None, due_date=None, due_date_set=None, entries=None, evidence_data=None, execution_count=None, field_mapping=None, for_each_index=None, for_each_inputs=None, form=None, id=None, ignore_worker=None, indent=None, input=None, loop=None, message=None, next_tasks=None, note=None, outputs=None, parent_block_count=None, parent_playbook_id=None, patched=None, playbook_inputs=None, previous_tasks=None, reminder=None, reputation_calc=None, restricted_completion=None, script_arguments=None, separate_context=None, sla=None, sla_reminder=None, start_date=None, state=None, sub_playbook=None, task=None, task_complete_data=None, task_id=None, timer_triggers=None, type=None, view=None, will_not_execute_count=None): # noqa: E501
"""InvestigationPlaybookTask - a model defined in Swagger""" # noqa: E501
self._arguments = None
self._assignee = None
self._assignee_set = None
self._blocking_tasks = None
self._comments = None
self._completed_by = None
self._completed_count = None
self._completed_date = None
self._conditions = None
self._continue_on_error = None
self._default_assignee = None
self._default_assignee_complex = None
self._default_reminder = None
self._due_date = None
self._due_date_set = None
self._entries = None
self._evidence_data = None
self._execution_count = None
self._field_mapping = None
self._for_each_index = None
self._for_each_inputs = None
self._form = None
self._id = None
self._ignore_worker = None
self._indent = None
self._input = None
self._loop = None
self._message = None
self._next_tasks = None
self._note = None
self._outputs = None
self._parent_block_count = None
self._parent_playbook_id = None
self._patched = None
self._playbook_inputs = None
self._previous_tasks = None
self._reminder = None
self._reputation_calc = None
self._restricted_completion = None
self._script_arguments = None
self._separate_context = None
self._sla = None
self._sla_reminder = None
self._start_date = None
self._state = None
self._sub_playbook = None
self._task = None
self._task_complete_data = None
self._task_id = None
self._timer_triggers = None
self._type = None
self._view = None
self._will_not_execute_count = None
self.discriminator = None
if arguments is not None:
self.arguments = arguments
if assignee is not None:
self.assignee = assignee
if assignee_set is not None:
self.assignee_set = assignee_set
if blocking_tasks is not None:
self.blocking_tasks = blocking_tasks
if comments is not None:
self.comments = comments
if completed_by is not None:
self.completed_by = completed_by
if completed_count is not None:
self.completed_count = completed_count
if completed_date is not None:
self.completed_date = completed_date
if conditions is not None:
self.conditions = conditions
if continue_on_error is not None:
self.continue_on_error = continue_on_error
if default_assignee is not None:
self.default_assignee = default_assignee
if default_assignee_complex is not None:
self.default_assignee_complex = default_assignee_complex
if default_reminder is not None:
self.default_reminder = default_reminder
if due_date is not None:
self.due_date = due_date
if due_date_set is not None:
self.due_date_set = due_date_set
if entries is not None:
self.entries = entries
if evidence_data is not None:
self.evidence_data = evidence_data
if execution_count is not None:
self.execution_count = execution_count
if field_mapping is not None:
self.field_mapping = field_mapping
if for_each_index is not None:
self.for_each_index = for_each_index
if for_each_inputs is not None:
self.for_each_inputs = for_each_inputs
if form is not None:
self.form = form
if id is not None:
self.id = id
if ignore_worker is not None:
self.ignore_worker = ignore_worker
if indent is not None:
self.indent = indent
if input is not None:
self.input = input
if loop is not None:
self.loop = loop
if message is not None:
self.message = message
if next_tasks is not None:
self.next_tasks = next_tasks
if note is not None:
self.note = note
if outputs is not None:
self.outputs = outputs
if parent_block_count is not None:
self.parent_block_count = parent_block_count
if parent_playbook_id is not None:
self.parent_playbook_id = parent_playbook_id
if patched is not None:
self.patched = patched
if playbook_inputs is not None:
self.playbook_inputs = playbook_inputs
if previous_tasks is not None:
self.previous_tasks = previous_tasks
if reminder is not None:
self.reminder = reminder
if reputation_calc is not None:
self.reputation_calc = reputation_calc
if restricted_completion is not None:
self.restricted_completion = restricted_completion
if script_arguments is not None:
self.script_arguments = script_arguments
if separate_context is not None:
self.separate_context = separate_context
if sla is not None:
self.sla = sla
if sla_reminder is not None:
self.sla_reminder = sla_reminder
if start_date is not None:
self.start_date = start_date
if state is not None:
self.state = state
if sub_playbook is not None:
self.sub_playbook = sub_playbook
if task is not None:
self.task = task
if task_complete_data is not None:
self.task_complete_data = task_complete_data
if task_id is not None:
self.task_id = task_id
if timer_triggers is not None:
self.timer_triggers = timer_triggers
if type is not None:
self.type = type
if view is not None:
self.view = view
if will_not_execute_count is not None:
self.will_not_execute_count = will_not_execute_count
@property
def arguments(self):
"""Gets the arguments of this InvestigationPlaybookTask. # noqa: E501
:return: The arguments of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, object)
"""
return self._arguments
@arguments.setter
def arguments(self, arguments):
"""Sets the arguments of this InvestigationPlaybookTask.
:param arguments: The arguments of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, object)
"""
self._arguments = arguments
@property
def assignee(self):
"""Gets the assignee of this InvestigationPlaybookTask. # noqa: E501
:return: The assignee of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._assignee
@assignee.setter
def assignee(self, assignee):
"""Sets the assignee of this InvestigationPlaybookTask.
:param assignee: The assignee of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._assignee = assignee
@property
def assignee_set(self):
"""Gets the assignee_set of this InvestigationPlaybookTask. # noqa: E501
:return: The assignee_set of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._assignee_set
@assignee_set.setter
def assignee_set(self, assignee_set):
"""Sets the assignee_set of this InvestigationPlaybookTask.
:param assignee_set: The assignee_set of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._assignee_set = assignee_set
@property
def blocking_tasks(self):
"""Gets the blocking_tasks of this InvestigationPlaybookTask. # noqa: E501
:return: The blocking_tasks of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[str]
"""
return self._blocking_tasks
@blocking_tasks.setter
def blocking_tasks(self, blocking_tasks):
"""Sets the blocking_tasks of this InvestigationPlaybookTask.
:param blocking_tasks: The blocking_tasks of this InvestigationPlaybookTask. # noqa: E501
:type: list[str]
"""
self._blocking_tasks = blocking_tasks
@property
def comments(self):
"""Gets the comments of this InvestigationPlaybookTask. # noqa: E501
Whether this task had any comments or not # noqa: E501
:return: The comments of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._comments
@comments.setter
def comments(self, comments):
"""Sets the comments of this InvestigationPlaybookTask.
Whether this task had any comments or not # noqa: E501
:param comments: The comments of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._comments = comments
@property
def completed_by(self):
"""Gets the completed_by of this InvestigationPlaybookTask. # noqa: E501
:return: The completed_by of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._completed_by
@completed_by.setter
def completed_by(self, completed_by):
"""Sets the completed_by of this InvestigationPlaybookTask.
:param completed_by: The completed_by of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._completed_by = completed_by
@property
def completed_count(self):
"""Gets the completed_count of this InvestigationPlaybookTask. # noqa: E501
:return: The completed_count of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._completed_count
@completed_count.setter
def completed_count(self, completed_count):
"""Sets the completed_count of this InvestigationPlaybookTask.
:param completed_count: The completed_count of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._completed_count = completed_count
@property
def completed_date(self):
"""Gets the completed_date of this InvestigationPlaybookTask. # noqa: E501
:return: The completed_date of this InvestigationPlaybookTask. # noqa: E501
:rtype: datetime
"""
return self._completed_date
@completed_date.setter
def completed_date(self, completed_date):
"""Sets the completed_date of this InvestigationPlaybookTask.
:param completed_date: The completed_date of this InvestigationPlaybookTask. # noqa: E501
:type: datetime
"""
self._completed_date = completed_date
@property
def conditions(self):
"""Gets the conditions of this InvestigationPlaybookTask. # noqa: E501
Conditions - optional list of conditions to run when the task is conditional. Conditions are checked in order (i.e. the first one that is satisfied is used) # noqa: E501
:return: The conditions of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[TaskCondition]
"""
return self._conditions
@conditions.setter
def conditions(self, conditions):
"""Sets the conditions of this InvestigationPlaybookTask.
Conditions - optional list of conditions to run when the task is conditional. Conditions are checked in order (i.e. the first one that is satisfied is used) # noqa: E501
:param conditions: The conditions of this InvestigationPlaybookTask. # noqa: E501
:type: list[TaskCondition]
"""
self._conditions = conditions
@property
def continue_on_error(self):
"""Gets the continue_on_error of this InvestigationPlaybookTask. # noqa: E501
:return: The continue_on_error of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._continue_on_error
@continue_on_error.setter
def continue_on_error(self, continue_on_error):
"""Sets the continue_on_error of this InvestigationPlaybookTask.
:param continue_on_error: The continue_on_error of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._continue_on_error = continue_on_error
@property
def default_assignee(self):
"""Gets the default_assignee of this InvestigationPlaybookTask. # noqa: E501
:return: The default_assignee of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._default_assignee
@default_assignee.setter
def default_assignee(self, default_assignee):
"""Sets the default_assignee of this InvestigationPlaybookTask.
:param default_assignee: The default_assignee of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._default_assignee = default_assignee
@property
def default_assignee_complex(self):
"""Gets the default_assignee_complex of this InvestigationPlaybookTask. # noqa: E501
:return: The default_assignee_complex of this InvestigationPlaybookTask. # noqa: E501
:rtype: AdvanceArg
"""
return self._default_assignee_complex
@default_assignee_complex.setter
def default_assignee_complex(self, default_assignee_complex):
"""Sets the default_assignee_complex of this InvestigationPlaybookTask.
:param default_assignee_complex: The default_assignee_complex of this InvestigationPlaybookTask. # noqa: E501
:type: AdvanceArg
"""
self._default_assignee_complex = default_assignee_complex
@property
def default_reminder(self):
"""Gets the default_reminder of this InvestigationPlaybookTask. # noqa: E501
:return: The default_reminder of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._default_reminder
@default_reminder.setter
def default_reminder(self, default_reminder):
"""Sets the default_reminder of this InvestigationPlaybookTask.
:param default_reminder: The default_reminder of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._default_reminder = default_reminder
@property
def due_date(self):
"""Gets the due_date of this InvestigationPlaybookTask. # noqa: E501
:return: The due_date of this InvestigationPlaybookTask. # noqa: E501
:rtype: datetime
"""
return self._due_date
@due_date.setter
def due_date(self, due_date):
"""Sets the due_date of this InvestigationPlaybookTask.
:param due_date: The due_date of this InvestigationPlaybookTask. # noqa: E501
:type: datetime
"""
self._due_date = due_date
@property
def due_date_set(self):
"""Gets the due_date_set of this InvestigationPlaybookTask. # noqa: E501
:return: The due_date_set of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._due_date_set
@due_date_set.setter
def due_date_set(self, due_date_set):
"""Sets the due_date_set of this InvestigationPlaybookTask.
:param due_date_set: The due_date_set of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._due_date_set = due_date_set
@property
def entries(self):
"""Gets the entries of this InvestigationPlaybookTask. # noqa: E501
:return: The entries of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[str]
"""
return self._entries
@entries.setter
def entries(self, entries):
"""Sets the entries of this InvestigationPlaybookTask.
:param entries: The entries of this InvestigationPlaybookTask. # noqa: E501
:type: list[str]
"""
self._entries = entries
@property
def evidence_data(self):
"""Gets the evidence_data of this InvestigationPlaybookTask. # noqa: E501
:return: The evidence_data of this InvestigationPlaybookTask. # noqa: E501
:rtype: EvidenceData
"""
return self._evidence_data
@evidence_data.setter
def evidence_data(self, evidence_data):
"""Sets the evidence_data of this InvestigationPlaybookTask.
:param evidence_data: The evidence_data of this InvestigationPlaybookTask. # noqa: E501
:type: EvidenceData
"""
self._evidence_data = evidence_data
@property
def execution_count(self):
"""Gets the execution_count of this InvestigationPlaybookTask. # noqa: E501
:return: The execution_count of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._execution_count
@execution_count.setter
def execution_count(self, execution_count):
"""Sets the execution_count of this InvestigationPlaybookTask.
:param execution_count: The execution_count of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._execution_count = execution_count
@property
def field_mapping(self):
"""Gets the field_mapping of this InvestigationPlaybookTask. # noqa: E501
:return: The field_mapping of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[FieldMapping]
"""
return self._field_mapping
@field_mapping.setter
def field_mapping(self, field_mapping):
"""Sets the field_mapping of this InvestigationPlaybookTask.
:param field_mapping: The field_mapping of this InvestigationPlaybookTask. # noqa: E501
:type: list[FieldMapping]
"""
self._field_mapping = field_mapping
@property
def for_each_index(self):
"""Gets the for_each_index of this InvestigationPlaybookTask. # noqa: E501
Parameters needed for loops # noqa: E501
:return: The for_each_index of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._for_each_index
@for_each_index.setter
def for_each_index(self, for_each_index):
"""Sets the for_each_index of this InvestigationPlaybookTask.
Parameters needed for loops # noqa: E501
:param for_each_index: The for_each_index of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._for_each_index = for_each_index
@property
def for_each_inputs(self):
"""Gets the for_each_inputs of this InvestigationPlaybookTask. # noqa: E501
:return: The for_each_inputs of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, list[object])
"""
return self._for_each_inputs
@for_each_inputs.setter
def for_each_inputs(self, for_each_inputs):
"""Sets the for_each_inputs of this InvestigationPlaybookTask.
:param for_each_inputs: The for_each_inputs of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, list[object])
"""
self._for_each_inputs = for_each_inputs
@property
def form(self):
"""Gets the form of this InvestigationPlaybookTask. # noqa: E501
:return: The form of this InvestigationPlaybookTask. # noqa: E501
:rtype: DataCollectionForm
"""
return self._form
@form.setter
def form(self, form):
"""Sets the form of this InvestigationPlaybookTask.
:param form: The form of this InvestigationPlaybookTask. # noqa: E501
:type: DataCollectionForm
"""
self._form = form
@property
def id(self):
"""Gets the id of this InvestigationPlaybookTask. # noqa: E501
:return: The id of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this InvestigationPlaybookTask.
:param id: The id of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._id = id
@property
def ignore_worker(self):
"""Gets the ignore_worker of this InvestigationPlaybookTask. # noqa: E501
Do not run this task in a worker # noqa: E501
:return: The ignore_worker of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._ignore_worker
@ignore_worker.setter
def ignore_worker(self, ignore_worker):
"""Sets the ignore_worker of this InvestigationPlaybookTask.
Do not run this task in a worker # noqa: E501
:param ignore_worker: The ignore_worker of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._ignore_worker = ignore_worker
@property
def indent(self):
"""Gets the indent of this InvestigationPlaybookTask. # noqa: E501
:return: The indent of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._indent
@indent.setter
def indent(self, indent):
"""Sets the indent of this InvestigationPlaybookTask.
:param indent: The indent of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._indent = indent
@property
def input(self):
"""Gets the input of this InvestigationPlaybookTask. # noqa: E501
:return: The input of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._input
@input.setter
def input(self, input):
"""Sets the input of this InvestigationPlaybookTask.
:param input: The input of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._input = input
@property
def loop(self):
"""Gets the loop of this InvestigationPlaybookTask. # noqa: E501
:return: The loop of this InvestigationPlaybookTask. # noqa: E501
:rtype: TaskLoop
"""
return self._loop
@loop.setter
def loop(self, loop):
"""Sets the loop of this InvestigationPlaybookTask.
:param loop: The loop of this InvestigationPlaybookTask. # noqa: E501
:type: TaskLoop
"""
self._loop = loop
@property
def message(self):
"""Gets the message of this InvestigationPlaybookTask. # noqa: E501
:return: The message of this InvestigationPlaybookTask. # noqa: E501
:rtype: NotifiableItem
"""
return self._message
@message.setter
def message(self, message):
"""Sets the message of this InvestigationPlaybookTask.
:param message: The message of this InvestigationPlaybookTask. # noqa: E501
:type: NotifiableItem
"""
self._message = message
@property
def next_tasks(self):
"""Gets the next_tasks of this InvestigationPlaybookTask. # noqa: E501
:return: The next_tasks of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, list[str])
"""
return self._next_tasks
@next_tasks.setter
def next_tasks(self, next_tasks):
"""Sets the next_tasks of this InvestigationPlaybookTask.
:param next_tasks: The next_tasks of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, list[str])
"""
self._next_tasks = next_tasks
@property
def note(self):
"""Gets the note of this InvestigationPlaybookTask. # noqa: E501
:return: The note of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._note
@note.setter
def note(self, note):
"""Sets the note of this InvestigationPlaybookTask.
:param note: The note of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._note = note
@property
def outputs(self):
"""Gets the outputs of this InvestigationPlaybookTask. # noqa: E501
:return: The outputs of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, object)
"""
return self._outputs
@outputs.setter
def outputs(self, outputs):
"""Sets the outputs of this InvestigationPlaybookTask.
:param outputs: The outputs of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, object)
"""
self._outputs = outputs
@property
def parent_block_count(self):
"""Gets the parent_block_count of this InvestigationPlaybookTask. # noqa: E501
The number of tasks that are waiting or blocked in sub-playbooks of this task # noqa: E501
:return: The parent_block_count of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._parent_block_count
@parent_block_count.setter
def parent_block_count(self, parent_block_count):
"""Sets the parent_block_count of this InvestigationPlaybookTask.
The number of tasks that are waiting or blocked in sub-playbooks of this task # noqa: E501
:param parent_block_count: The parent_block_count of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._parent_block_count = parent_block_count
@property
def parent_playbook_id(self):
"""Gets the parent_playbook_id of this InvestigationPlaybookTask. # noqa: E501
:return: The parent_playbook_id of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._parent_playbook_id
@parent_playbook_id.setter
def parent_playbook_id(self, parent_playbook_id):
"""Sets the parent_playbook_id of this InvestigationPlaybookTask.
:param parent_playbook_id: The parent_playbook_id of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._parent_playbook_id = parent_playbook_id
@property
def patched(self):
"""Gets the patched of this InvestigationPlaybookTask. # noqa: E501
Indicates whether this task was patched into the investigation playbook and did not originally belong to it # noqa: E501
:return: The patched of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._patched
@patched.setter
def patched(self, patched):
"""Sets the patched of this InvestigationPlaybookTask.
Indicates whether this task was patched into the investigation playbook and did not originally belong to it # noqa: E501
:param patched: The patched of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._patched = patched
@property
def playbook_inputs(self):
"""Gets the playbook_inputs of this InvestigationPlaybookTask. # noqa: E501
:return: The playbook_inputs of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, object)
"""
return self._playbook_inputs
@playbook_inputs.setter
def playbook_inputs(self, playbook_inputs):
"""Sets the playbook_inputs of this InvestigationPlaybookTask.
:param playbook_inputs: The playbook_inputs of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, object)
"""
self._playbook_inputs = playbook_inputs
@property
def previous_tasks(self):
"""Gets the previous_tasks of this InvestigationPlaybookTask. # noqa: E501
:return: The previous_tasks of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, list[str])
"""
return self._previous_tasks
@previous_tasks.setter
def previous_tasks(self, previous_tasks):
"""Sets the previous_tasks of this InvestigationPlaybookTask.
:param previous_tasks: The previous_tasks of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, list[str])
"""
self._previous_tasks = previous_tasks
@property
def reminder(self):
"""Gets the reminder of this InvestigationPlaybookTask. # noqa: E501
Duration in minutes; this field is not persisted here # noqa: E501
:return: The reminder of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._reminder
@reminder.setter
def reminder(self, reminder):
"""Sets the reminder of this InvestigationPlaybookTask.
Duration in minutes; this field is not persisted here # noqa: E501
:param reminder: The reminder of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._reminder = reminder
@property
def reputation_calc(self):
"""Gets the reputation_calc of this InvestigationPlaybookTask. # noqa: E501
:return: The reputation_calc of this InvestigationPlaybookTask. # noqa: E501
:rtype: ReputationCalcAlg
"""
return self._reputation_calc
@reputation_calc.setter
def reputation_calc(self, reputation_calc):
"""Sets the reputation_calc of this InvestigationPlaybookTask.
:param reputation_calc: The reputation_calc of this InvestigationPlaybookTask. # noqa: E501
:type: ReputationCalcAlg
"""
self._reputation_calc = reputation_calc
@property
def restricted_completion(self):
"""Gets the restricted_completion of this InvestigationPlaybookTask. # noqa: E501
:return: The restricted_completion of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._restricted_completion
@restricted_completion.setter
def restricted_completion(self, restricted_completion):
"""Sets the restricted_completion of this InvestigationPlaybookTask.
:param restricted_completion: The restricted_completion of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._restricted_completion = restricted_completion
@property
def script_arguments(self):
"""Gets the script_arguments of this InvestigationPlaybookTask. # noqa: E501
:return: The script_arguments of this InvestigationPlaybookTask. # noqa: E501
:rtype: dict(str, AdvanceArg)
"""
return self._script_arguments
@script_arguments.setter
def script_arguments(self, script_arguments):
"""Sets the script_arguments of this InvestigationPlaybookTask.
:param script_arguments: The script_arguments of this InvestigationPlaybookTask. # noqa: E501
:type: dict(str, AdvanceArg)
"""
self._script_arguments = script_arguments
@property
def separate_context(self):
"""Gets the separate_context of this InvestigationPlaybookTask. # noqa: E501
:return: The separate_context of this InvestigationPlaybookTask. # noqa: E501
:rtype: bool
"""
return self._separate_context
@separate_context.setter
def separate_context(self, separate_context):
"""Sets the separate_context of this InvestigationPlaybookTask.
:param separate_context: The separate_context of this InvestigationPlaybookTask. # noqa: E501
:type: bool
"""
self._separate_context = separate_context
@property
def sla(self):
"""Gets the sla of this InvestigationPlaybookTask. # noqa: E501
:return: The sla of this InvestigationPlaybookTask. # noqa: E501
:rtype: SLA
"""
return self._sla
@sla.setter
def sla(self, sla):
"""Sets the sla of this InvestigationPlaybookTask.
:param sla: The sla of this InvestigationPlaybookTask. # noqa: E501
:type: SLA
"""
self._sla = sla
@property
def sla_reminder(self):
"""Gets the sla_reminder of this InvestigationPlaybookTask. # noqa: E501
:return: The sla_reminder of this InvestigationPlaybookTask. # noqa: E501
:rtype: SLA
"""
return self._sla_reminder
@sla_reminder.setter
def sla_reminder(self, sla_reminder):
"""Sets the sla_reminder of this InvestigationPlaybookTask.
:param sla_reminder: The sla_reminder of this InvestigationPlaybookTask. # noqa: E501
:type: SLA
"""
self._sla_reminder = sla_reminder
@property
def start_date(self):
"""Gets the start_date of this InvestigationPlaybookTask. # noqa: E501
:return: The start_date of this InvestigationPlaybookTask. # noqa: E501
:rtype: datetime
"""
return self._start_date
@start_date.setter
def start_date(self, start_date):
"""Sets the start_date of this InvestigationPlaybookTask.
:param start_date: The start_date of this InvestigationPlaybookTask. # noqa: E501
:type: datetime
"""
self._start_date = start_date
@property
def state(self):
"""Gets the state of this InvestigationPlaybookTask. # noqa: E501
:return: The state of this InvestigationPlaybookTask. # noqa: E501
:rtype: TaskState
"""
return self._state
@state.setter
def state(self, state):
"""Sets the state of this InvestigationPlaybookTask.
:param state: The state of this InvestigationPlaybookTask. # noqa: E501
:type: TaskState
"""
self._state = state
@property
def sub_playbook(self):
"""Gets the sub_playbook of this InvestigationPlaybookTask. # noqa: E501
:return: The sub_playbook of this InvestigationPlaybookTask. # noqa: E501
:rtype: InvestigationPlaybook
"""
return self._sub_playbook
@sub_playbook.setter
def sub_playbook(self, sub_playbook):
"""Sets the sub_playbook of this InvestigationPlaybookTask.
:param sub_playbook: The sub_playbook of this InvestigationPlaybookTask. # noqa: E501
:type: InvestigationPlaybook
"""
self._sub_playbook = sub_playbook
@property
def task(self):
"""Gets the task of this InvestigationPlaybookTask. # noqa: E501
:return: The task of this InvestigationPlaybookTask. # noqa: E501
:rtype: Task
"""
return self._task
@task.setter
def task(self, task):
"""Sets the task of this InvestigationPlaybookTask.
:param task: The task of this InvestigationPlaybookTask. # noqa: E501
:type: Task
"""
self._task = task
@property
def task_complete_data(self):
"""Gets the task_complete_data of this InvestigationPlaybookTask. # noqa: E501
History complete data # noqa: E501
:return: The task_complete_data of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[InvPlaybookTaskCompleteData]
"""
return self._task_complete_data
@task_complete_data.setter
def task_complete_data(self, task_complete_data):
"""Sets the task_complete_data of this InvestigationPlaybookTask.
History complete data # noqa: E501
:param task_complete_data: The task_complete_data of this InvestigationPlaybookTask. # noqa: E501
:type: list[InvPlaybookTaskCompleteData]
"""
self._task_complete_data = task_complete_data
@property
def task_id(self):
"""Gets the task_id of this InvestigationPlaybookTask. # noqa: E501
:return: The task_id of this InvestigationPlaybookTask. # noqa: E501
:rtype: str
"""
return self._task_id
@task_id.setter
def task_id(self, task_id):
"""Sets the task_id of this InvestigationPlaybookTask.
:param task_id: The task_id of this InvestigationPlaybookTask. # noqa: E501
:type: str
"""
self._task_id = task_id
@property
def timer_triggers(self):
"""Gets the timer_triggers of this InvestigationPlaybookTask. # noqa: E501
SLA fields # noqa: E501
:return: The timer_triggers of this InvestigationPlaybookTask. # noqa: E501
:rtype: list[TimerTrigger]
"""
return self._timer_triggers
@timer_triggers.setter
def timer_triggers(self, timer_triggers):
"""Sets the timer_triggers of this InvestigationPlaybookTask.
SLA fields # noqa: E501
:param timer_triggers: The timer_triggers of this InvestigationPlaybookTask. # noqa: E501
:type: list[TimerTrigger]
"""
self._timer_triggers = timer_triggers
@property
def type(self):
"""Gets the type of this InvestigationPlaybookTask. # noqa: E501
:return: The type of this InvestigationPlaybookTask. # noqa: E501
:rtype: TaskType
"""
return self._type
@type.setter
def type(self, type):
"""Sets the type of this InvestigationPlaybookTask.
:param type: The type of this InvestigationPlaybookTask. # noqa: E501
:type: TaskType
"""
self._type = type
@property
def view(self):
"""Gets the view of this InvestigationPlaybookTask. # noqa: E501
:return: The view of this InvestigationPlaybookTask. # noqa: E501
:rtype: TaskView
"""
return self._view
@view.setter
def view(self, view):
"""Sets the view of this InvestigationPlaybookTask.
:param view: The view of this InvestigationPlaybookTask. # noqa: E501
:type: TaskView
"""
self._view = view
@property
def will_not_execute_count(self):
"""Gets the will_not_execute_count of this InvestigationPlaybookTask. # noqa: E501
:return: The will_not_execute_count of this InvestigationPlaybookTask. # noqa: E501
:rtype: int
"""
return self._will_not_execute_count
@will_not_execute_count.setter
def will_not_execute_count(self, will_not_execute_count):
"""Sets the will_not_execute_count of this InvestigationPlaybookTask.
:param will_not_execute_count: The will_not_execute_count of this InvestigationPlaybookTask. # noqa: E501
:type: int
"""
self._will_not_execute_count = will_not_execute_count
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(InvestigationPlaybookTask, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, InvestigationPlaybookTask):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
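The `to_dict` method above recurses into nested models, lists, and dicts by duck-typing on a `to_dict` attribute. A minimal, self-contained sketch of the same convention on hypothetical `Point` and `Segment` models (not part of the generated SDK; plain `dict.items()` stands in for `six.iteritems`, and only the list branch is kept to show the recursion):

```python
class Point(object):
    # Hypothetical model following the same swagger-codegen conventions.
    swagger_types = {'x': 'int', 'y': 'int'}

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def to_dict(self):
        result = {}
        for attr, _ in self.swagger_types.items():  # six.iteritems in the SDK
            value = getattr(self, attr)
            if isinstance(value, list):
                # recurse into list elements that are themselves models
                result[attr] = [v.to_dict() if hasattr(v, "to_dict") else v for v in value]
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result


class Segment(object):
    # Hypothetical container model holding a list of Points.
    swagger_types = {'points': 'list[Point]'}

    def __init__(self, points):
        self.points = points

    to_dict = Point.to_dict  # same recursive serializer


seg = Segment([Point(0, 0), Point(3, 4)])
print(seg.to_dict())  # {'points': [{'x': 0, 'y': 0}, {'x': 3, 'y': 4}]}
```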

napari_imc/io/__init__.py | neuromusic/napari-imc | MIT

from .imaxt import ImaxtFileReader
from .mcd import McdFileReader
from .txt import TxtFileReader
__all__ = [
    'ImaxtFileReader',
    'McdFileReader',
    'TxtFileReader',
]

oed/__init__.py | wgshen/OED | MIT

from .oed import OED
__all__ = [
    "OED"
]

api/permissions.py | soltanoff/simple_file_server | MIT

from rest_framework import permissions
class IsStaffOrReadOnly(permissions.BasePermission):
    """
    Custom permission to only allow admins to edit it.
    """

    def has_object_permission(self, request, view, obj):
        return request.method in permissions.SAFE_METHODS or request.user.is_staff
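The permission boils down to one boolean rule: read-only methods are always allowed, everything else requires a staff user. That rule can be exercised without the DRF plumbing; the sketch below fakes the request objects with `SimpleNamespace` (DRF's `SAFE_METHODS` is `('GET', 'HEAD', 'OPTIONS')`):

```python
from types import SimpleNamespace

SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')  # mirrors rest_framework.permissions.SAFE_METHODS


def has_object_permission(request):
    # same boolean rule as IsStaffOrReadOnly, minus the DRF plumbing
    return request.method in SAFE_METHODS or request.user.is_staff


reader = SimpleNamespace(method='GET', user=SimpleNamespace(is_staff=False))
writer = SimpleNamespace(method='DELETE', user=SimpleNamespace(is_staff=False))
admin = SimpleNamespace(method='DELETE', user=SimpleNamespace(is_staff=True))

print(has_object_permission(reader), has_object_permission(writer), has_object_permission(admin))
# → True False True
```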
| 27.818182 | 82 | 0.751634 | 38 | 306 | 5.921053 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179739 | 306 | 10 | 83 | 30.6 | 0.896414 | 0.163399 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
833ab5ac04df4cc2bfa2f945d2155461c52e1071 | 1,039 | py | Python | yibai-sms-python-sdk-1.0.0/yibai/api/Yibai.py | 100sms/yibai-python-sdk | 9907d0fbf147b5b3ce10e4afed2ac7f19d52af3f | [
"MIT"
] | null | null | null | yibai-sms-python-sdk-1.0.0/yibai/api/Yibai.py | 100sms/yibai-python-sdk | 9907d0fbf147b5b3ce10e4afed2ac7f19d52af3f | [
"MIT"
] | null | null | null | yibai-sms-python-sdk-1.0.0/yibai/api/Yibai.py | 100sms/yibai-python-sdk | 9907d0fbf147b5b3ce10e4afed2ac7f19d52af3f | [
"MIT"
] | 1 | 2019-11-26T11:49:54.000Z | 2019-11-26T11:49:54.000Z | # encoding=utf8
import HttpUtils
class YibaiApiError(Exception):
    def __init__(self, code, message):
        super(YibaiApiError, self).__init__(message)
        self.code = code


class YibaiClient(object):
    def __init__(self, server_url, apikey):
        self.serverUrl = server_url
        self.apikey = apikey

    def sms_batch_submit(self, submits):
        return self.__execute({'submits': submits}, '/sms/batchSubmit')

    def sms_pull_status_report(self):
        return self.__execute({}, '/sms/pullStatusReport')

    def sms_pull_reply_message(self):
        return self.__execute({}, '/sms/pullReply')

    def user_info(self):
        return self.__execute({}, '/user/info')

    def __execute(self, request, url_path):
        request['apikey'] = self.apikey
        req_url = self.serverUrl + url_path
        res = HttpUtils.post_json(req_url, request)
        if res['code'] == 200:
            return res['response']
        raise YibaiApiError(res['code'], res['message'])
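Every client method funnels through the same response-envelope handling: a `code` of 200 yields the `response` payload, anything else raises a typed error carrying the code. A standalone sketch of that pattern with a hypothetical `ApiError`/`unwrap` pair (no network, no `HttpUtils` dependency):

```python
class ApiError(Exception):
    # hypothetical stand-in for YibaiApiError
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code


def unwrap(res):
    # mirrors the envelope handling in YibaiClient.__execute
    if res['code'] == 200:
        return res['response']
    raise ApiError(res['code'], res['message'])


print(unwrap({'code': 200, 'response': {'ok': True}}))  # {'ok': True}
```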

lab_4/start.py | AnastasiaZheleznyakova/2020-2-level-labs | MIT

if __name__ == '__main__':
    pass

    RESULT = 1
    # DO NOT REMOVE NEXT LINE - KEEP IT INTENTIONALLY LAST
    assert RESULT == 1, ''

learning/__init__.py | aleisalem/Maat | Apache-2.0

__all__ = ["feature_extraction", "hmm_learner", "scikit_learners", "string_kernel"]

primitives_ubc/regCCFS/__init__.py | tonyjo/ubc_primitives | Apache-2.0

from .ccfsReg import CanonicalCorrelationForestsRegressionPrimitive
__all__ = ['CanonicalCorrelationForestsRegressionPrimitive']
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__) # type: ignore

preparedstatement.py | shgysk8zer0/pyutils | MIT

import sqlite3
class PreparedStatement:
    __cursor = None
    __sql = ''
    __params = {}

    def __init__(self, con: sqlite3.Connection, sql: str):
        self.__cursor = con.cursor()
        self.__sql = sql
        self.__params = {}

    def __str__(self) -> str:
        return self.__sql

    def __getitem__(self, key: str):
        return self.__params.get(key)

    def __delitem__(self, key: str):
        del self.__params[key]

    def __contains__(self, key: str) -> bool:
        return key in self.__params

    def __setitem__(self, name: str, value) -> None:
        self.__params[name] = value

    def __call__(self, **kwargs):
        return self.__cursor.execute(self.__sql, {**kwargs, **self.__params})

    def bind(self, name: str, value):
        self.__params[name] = value
        return self

    def bindall(self, **kwargs):
        self.__params = kwargs
        return self

    def execute(self, **kwargs):
        return self.__cursor.execute(self.__sql, {**self.__params, **kwargs})

    def executemany(self, *args):
        result = [{**self.__params, **arg} for arg in args]
        return self.__cursor.executemany(self.__sql, result)

    @property
    def params(self) -> dict:
        return self.__params
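The class is a thin convenience layer over sqlite3's named-parameter execution: bound parameters are kept in a dict and merged with per-call keyword arguments. The underlying mechanism it wraps, shown in isolation against an in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, age INTEGER)")
con.execute("INSERT INTO users VALUES (:name, :age)", {"name": "ada", "age": 36})

# Merging stored defaults with per-call overrides, as __call__/execute do
# with {**self.__params, **kwargs}.
defaults = {"name": "ada"}
overrides = {}
row = con.execute("SELECT age FROM users WHERE name = :name",
                  {**defaults, **overrides}).fetchone()
print(row[0])  # 36
```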

electsysApi/shared/exception.py | yuxiqian/electsys-api | MIT

#!/usr/bin/env python
# encoding: utf-8
'''
@author: yuxiqian
@license: MIT
@contact: akaza_akari@sjtu.edu.cn
@software: electsys-api
@file: electsysApi/shared/exception.py
@time: 2019/1/9
'''
class RequestError(BaseException):
    pass


class ParseError(BaseException):
    pass


class ParseWarning(Warning):
    pass
55ec22a317bb062a3d79bbd46b18d734b28581cf | 58 | py | Python | minimally_sufficient_pandas/__init__.py | dexplo/minimally_sufficient_pandas | d07710f03daa757f5778aa66ee68952d03467809 | [
"BSD-3-Clause"
] | null | null | null | minimally_sufficient_pandas/__init__.py | dexplo/minimally_sufficient_pandas | d07710f03daa757f5778aa66ee68952d03467809 | [
"BSD-3-Clause"
] | null | null | null | minimally_sufficient_pandas/__init__.py | dexplo/minimally_sufficient_pandas | d07710f03daa757f5778aa66ee68952d03467809 | [
"BSD-3-Clause"
] | null | null | null | from ._pandas_accessor import _MSP
__version__ = '0.0.1'

src/fireo/utils/utils.py | jshep23/FireO | Apache-2.0

import re
from google.cloud import firestore
def collection_name(model):
    return re.sub('(?!^)([A-Z]+)', r'_\1', model).lower()


def ref_path(key):
    return key.split('/')


def collection_path(key):
    return '/'.join(key.split('/')[:-1])


def get_parent(key):
    return collection_path(key)


def get_parent_doc(key):
    return '/'.join(key.split('/')[:-2])


def get_id(key):
    try:
        return key.split('/')[-1]
    except AttributeError:
        return None


def GeoPoint(latitude: float, longitude: float):
    return firestore.GeoPoint(latitude, longitude)


def get_nested(dict, *args):
    if args and dict:
        element = args[0]
        if element:
            value = dict.get(element)
            return value if len(args) == 1 else get_nested(value, *args[1:])
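Two of these helpers are worth a quick check: `collection_name` converts a CamelCase model name to a snake_case collection name via the `(?!^)([A-Z]+)` lookahead (which skips the leading capital), and `get_nested` walks a nested dict one key at a time. Both are restated verbatim below so the snippet runs standalone; note that `get_nested` shadows the built-in `dict` name, exactly as in the source:

```python
import re


def collection_name(model):
    # CamelCase -> snake_case; (?!^) prevents a leading underscore
    return re.sub('(?!^)([A-Z]+)', r'_\1', model).lower()


def get_nested(dict, *args):  # shadows the builtin 'dict', as in the source
    if args and dict:
        element = args[0]
        if element:
            value = dict.get(element)
            return value if len(args) == 1 else get_nested(value, *args[1:])


print(collection_name('UserProfile'))                     # user_profile
print(get_nested({'a': {'b': {'c': 1}}}, 'a', 'b', 'c'))  # 1
```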

blackbook/migrations/0022_cleanup.py | bsiebens/blackbook | MIT

# Generated by Django 3.1.4 on 2021-01-22 22:56
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('blackbook', '0021_update_account_categories'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='budgetperiod',
            name='budget',
        ),
        migrations.RemoveField(
            model_name='transaction',
            name='account',
        ),
        migrations.RemoveField(
            model_name='transaction',
            name='journal_entry',
        ),
        migrations.RemoveField(
            model_name='transactionjournalentry',
            name='budget',
        ),
        migrations.RemoveField(
            model_name='transactionjournalentry',
            name='category',
        ),
        migrations.RemoveField(
            model_name='transactionjournalentry',
            name='from_account',
        ),
        migrations.RemoveField(
            model_name='transactionjournalentry',
            name='tags',
        ),
        migrations.RemoveField(
            model_name='transactionjournalentry',
            name='to_account',
        ),
        migrations.RemoveField(
            model_name='userprofile',
            name='user',
        ),
        migrations.DeleteModel(
            name='Account',
        ),
        migrations.DeleteModel(
            name='AccountType',
        ),
        migrations.DeleteModel(
            name='Budget',
        ),
        migrations.DeleteModel(
            name='BudgetPeriod',
        ),
        migrations.DeleteModel(
            name='Category',
        ),
        migrations.DeleteModel(
            name='Transaction',
        ),
        migrations.DeleteModel(
            name='TransactionJournalEntry',
        ),
        migrations.DeleteModel(
            name='UserProfile',
        ),
    ]

data_access_layer/abstract_classes/customer_dao.py | Alejandro-Fuste/python-bank-application | MIT

from abc import ABC, abstractmethod
from entities.customers import Customer
from typing import List
class CustomerDao(ABC):
    @abstractmethod
    def create_customer_entry(self, customer: Customer) -> Customer:
        pass

    @abstractmethod
    def get_customer_by_id(self, customer_id: str) -> Customer:
        pass

    @abstractmethod
    def get_all_customers(self) -> List[Customer]:
        pass

    @abstractmethod
    def get_customer_balance_by_id(self, customer_id: str, account_id: int) -> float:
        pass

    @abstractmethod
    def deposit_into_account_by_id(self, customer_id: str, account_id: int, deposit: float) -> float:
        pass

    @abstractmethod
    def withdraw_from_account_by_id(self, customer_id: str, account_id: int, withdraw: float) -> float:
        pass

    @abstractmethod
    def transfer_money_by_their_ids(self, customer_id: str, from_account_id: int, to_account_id: int,
                                    transfer_amount: float) -> float:
        pass

    @abstractmethod
    def update_customer_by_id(self, customer_id: str, customer: Customer) -> Customer:
        pass

    @abstractmethod
    def delete_customer_by_id(self, customer_id: int) -> bool:
        pass
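Because every method is declared with `@abstractmethod`, Python refuses to instantiate the DAO itself or any subclass that has not overridden them all; only a complete concrete implementation can be constructed. A minimal illustration of that enforcement with a hypothetical two-class miniature (not the class above):

```python
from abc import ABC, abstractmethod


class TinyDao(ABC):
    # Hypothetical miniature of the CustomerDao contract.
    @abstractmethod
    def get_balance(self, account_id: int) -> float: ...


class InMemoryDao(TinyDao):
    # Concrete subclass: every abstract method is overridden,
    # so instantiation is allowed.
    def __init__(self):
        self._balances = {1: 100.0}

    def get_balance(self, account_id: int) -> float:
        return self._balances[account_id]


try:
    TinyDao()  # abstract: raises TypeError
except TypeError as err:
    print("refused:", err.__class__.__name__)

print(InMemoryDao().get_balance(1))  # 100.0
```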

ex026.py | juniorpedroso/Exercicios-CEV-Python | MIT

frase = str(input('Type a sentence: ').strip().upper())
print('The letter A appears {} times'.format(frase.count('A')))
print('Its first appearance is at position {}'.format(frase.find('A') + 1))
print('It appears for the last time at position {}'.format(frase.rfind('A') + 1))

maskrcnn_benchmark/data/datasets/__init__.py | lipengfeizju/Detection | MIT

# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
from .coco import COCODataset
from .voc import PascalVOCDataset
from .concat_dataset import ConcatDataset
__all__ = ["COCODataset", "ConcatDataset", "PascalVOCDataset"]
# if isinstance(dataset, datasets.MyDataset):
#     return coco_evaluation(**args)

Codechef Factorial.py | zoya-0509/Practice-codes | MIT

t = int(input(""))
while t > 0:
    n = int(input(""))
    f = 1
    for i in range(1, n + 1):
        f = f * i
    print(f)
    t = t - 1
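The inner loop accumulates n! one factor at a time; the standard library computes the same value directly with `math.factorial`. A small cross-check, wrapping the same accumulator loop in a function so it can be compared without reading from stdin:

```python
import math


def factorial(n):
    # same accumulator loop as the exercise above
    f = 1
    for i in range(1, n + 1):
        f = f * i
    return f


print(factorial(5), math.factorial(5))  # 120 120
```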