hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9122373b59bc6d414c05d8fc25de317a2c149243 | 86 | py | Python | move_randomly/move_randomly.py | VeryHardBit/turtlebot2-slam-nav-bash | 1f8e24f886182eca6cbfad2a380501a53158f1a9 | [
"Apache-2.0"
] | null | null | null | move_randomly/move_randomly.py | VeryHardBit/turtlebot2-slam-nav-bash | 1f8e24f886182eca6cbfad2a380501a53158f1a9 | [
"Apache-2.0"
] | null | null | null | move_randomly/move_randomly.py | VeryHardBit/turtlebot2-slam-nav-bash | 1f8e24f886182eca6cbfad2a380501a53158f1a9 | [
"Apache-2.0"
] | null | null | null | import rospy
from sensor_msgs.msg import LaserScan
rospy.init_node("druken_turtlebot")
| 12.285714 | 35 | 0.825581 | 12 | 86 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 86 | 6 | 36 | 14.333333 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0.188235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
91412141c5573fcede9af51a40cf2cd18453d71a | 41 | py | Python | backend/tornado_api/app/__init__.py | andredias/spa-study-case | f1cf7f011f0be761c4d3ed2df61c1ef139cf6168 | [
"MIT"
] | 1 | 2021-05-26T13:44:20.000Z | 2021-05-26T13:44:20.000Z | backend/tornado_api/app/__init__.py | andredias/spa-study-case | f1cf7f011f0be761c4d3ed2df61c1ef139cf6168 | [
"MIT"
] | 2 | 2020-07-29T23:08:19.000Z | 2020-08-12T01:58:20.000Z | backend/tornado_api/app/__init__.py | andredias/spa-study-case | f1cf7f011f0be761c4d3ed2df61c1ef139cf6168 | [
"MIT"
] | null | null | null | from .handlers import login # noqa:F401
| 20.5 | 40 | 0.756098 | 6 | 41 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0.170732 | 41 | 1 | 41 | 41 | 0.823529 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e66c5e4cdf37f92cef88b60a1d1c626fa866f770 | 153 | py | Python | feedbackform/errors.py | aminbeigi/Feedback-Form | 2d3b9a96feba35d9e94b6b8cd8f5f287377cdce5 | [
"MIT"
] | null | null | null | feedbackform/errors.py | aminbeigi/Feedback-Form | 2d3b9a96feba35d9e94b6b8cd8f5f287377cdce5 | [
"MIT"
] | null | null | null | feedbackform/errors.py | aminbeigi/Feedback-Form | 2d3b9a96feba35d9e94b6b8cd8f5f287377cdce5 | [
"MIT"
] | null | null | null | from flask import render_template
from feedbackform import app
@app.errorhandler(404)
def page_not_found(e):
return render_template('404.html'), 404 | 25.5 | 43 | 0.797386 | 23 | 153 | 5.130435 | 0.695652 | 0.237288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.117647 | 153 | 6 | 43 | 25.5 | 0.807407 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
e6773d5c5a531ca67aeafc572b8ef7a9345594aa | 75 | py | Python | Art/myname.py | chanchon11/artchanchon | 9642dc530539c9ccbf4066c362a42d93bf1bcc53 | [
"MIT"
] | null | null | null | Art/myname.py | chanchon11/artchanchon | 9642dc530539c9ccbf4066c362a42d93bf1bcc53 | [
"MIT"
] | null | null | null | Art/myname.py | chanchon11/artchanchon | 9642dc530539c9ccbf4066c362a42d93bf1bcc53 | [
"MIT"
] | null | null | null | #myname.py
def fullname():
print('My name is Chanchon')
print('Helllo')
| 12.5 | 29 | 0.68 | 11 | 75 | 4.636364 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146667 | 75 | 5 | 30 | 15 | 0.796875 | 0.12 | 0 | 0 | 0 | 0 | 0.390625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
e686d336550b6cd1055234d06136442a5f632cbf | 62 | py | Python | l3ns/ldc/__init__.py | OlegJakushkin/l3ns | 320184cb03837b9d6d13cb6ff006263ad1a99544 | [
"MIT"
] | 3 | 2021-04-02T11:05:54.000Z | 2021-12-17T17:46:02.000Z | l3ns/ldc/__init__.py | OlegJakushkin/l3ns | 320184cb03837b9d6d13cb6ff006263ad1a99544 | [
"MIT"
] | 1 | 2020-10-31T08:36:11.000Z | 2020-10-31T08:36:11.000Z | l3ns/ldc/__init__.py | OlegJakushkin/l3ns | 320184cb03837b9d6d13cb6ff006263ad1a99544 | [
"MIT"
] | 1 | 2020-06-08T03:48:58.000Z | 2020-06-08T03:48:58.000Z | from .node import DockerNode
from .subnet import DockerSubnet
| 20.666667 | 32 | 0.83871 | 8 | 62 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 62 | 2 | 33 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e6bc215ea269bd43323b8976cd9b417cb4b7728e | 174 | py | Python | src/drstorage/models/__init__.py | chelling87/drstorage | 5d69cdd01306c8d890ace1b4277b64f50efa5114 | [
"BSD-3-Clause"
] | null | null | null | src/drstorage/models/__init__.py | chelling87/drstorage | 5d69cdd01306c8d890ace1b4277b64f50efa5114 | [
"BSD-3-Clause"
] | null | null | null | src/drstorage/models/__init__.py | chelling87/drstorage | 5d69cdd01306c8d890ace1b4277b64f50efa5114 | [
"BSD-3-Clause"
] | null | null | null | from .base import generic
from .f1 import F1_600, F1_1200
from .x2m import X2M_157
from .x2b import X2B_400
__all__ = ["generic", "F1_600", "F1_1200", "X2M_157", "X2B_400"]
| 24.857143 | 64 | 0.729885 | 31 | 174 | 3.709677 | 0.387097 | 0.086957 | 0.121739 | 0.191304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248322 | 0.143678 | 174 | 6 | 65 | 29 | 0.52349 | 0 | 0 | 0 | 0 | 0 | 0.195402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e6c43b41b14e7eba597babe81215f2c90026e207 | 310 | py | Python | 09/00/0.py | pylangstudy/201711 | be6222dde61373f67d25a2c926868b602463c5cc | [
"CC0-1.0"
] | null | null | null | 09/00/0.py | pylangstudy/201711 | be6222dde61373f67d25a2c926868b602463c5cc | [
"CC0-1.0"
] | 2 | 2017-10-31T23:37:36.000Z | 2017-11-02T23:31:07.000Z | 09/00/0.py | pylangstudy/201711 | be6222dde61373f67d25a2c926868b602463c5cc | [
"CC0-1.0"
] | null | null | null | import secrets
print(secrets.choice([100,200,300]))#100,200,300
print(secrets.randbelow(10))#0〜10
print(secrets.randbits(8))#0〜255(2**8())
print(secrets.token_bytes(8))
print(secrets.token_hex(8))
print(secrets.token_urlsafe(8))
print(secrets.compare_digest('a','a'))
print(secrets.compare_digest('a','b'))
| 23.846154 | 48 | 0.741935 | 54 | 310 | 4.203704 | 0.425926 | 0.422907 | 0.229075 | 0.237885 | 0.229075 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.041935 | 310 | 12 | 49 | 25.833333 | 0.646465 | 0.090323 | 0 | 0 | 0 | 0 | 0.014337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0.888889 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
e6e7f6ff02dad13afb720fc3e025d5dc6b36dc2b | 26,736 | py | Python | kwhelp/rules/__init__.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | null | null | null | kwhelp/rules/__init__.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | 4 | 2021-10-16T20:11:42.000Z | 2021-12-11T09:54:06.000Z | kwhelp/rules/__init__.py | Amourspirit/python-kwargshelper | 4851ad69cf26f0656bc4264c70f956226bf5017e | [
"MIT"
] | null | null | null | # coding: utf-8
from abc import ABC, abstractmethod
import numbers
import os
from pathlib import Path
from typing import Optional
from ..helper import is_iterable
# region Interface
class IRule(ABC):
"""
Abstract Interface Class for rules
See Also:
:doc:`/source/general/rules`
"""
def __init__(self, key: str, name: str, value: object, raise_errors: bool, originator: object):
"""
Constructor
Args:
key (str): the key that rule is to apply to.
name (str): the name of the field that value was assigned
value (object): the value that is assigned to ``field_name``
raise_errors (bool): determines if the rule can raise an error when validation fails
originator (object): the object whose attributes are validated
Raises:
TypeError: If any arg is not of the correct type
"""
if not isinstance(name, str):
msg = self._get_type_error_msg(name, 'name', 'str')
raise TypeError(msg)
self._name: str = name
if not isinstance(key, str):
msg = self._get_type_error_msg(key, 'key', 'str')
raise TypeError(msg)
self._key: str = key
if not isinstance(raise_errors, bool):
msg = self._get_type_error_msg(
raise_errors, 'raise_errors', 'bool')
raise TypeError(msg)
self._raise_errors = raise_errors
self._value: object = value
self._originator: object = originator
# region Abstract Methods
@abstractmethod
def validate(self) -> bool:
'''Gets whether the attrib field and value are valid'''
# endregion Abstract Methods
def _get_type_error_msg(self, arg: Optional[object] = None, arg_name: Optional[str] = None, expected_type: Optional[str] = None) -> str:
_arg = self.field_value if arg is None else arg
_arg_name = self.key if arg_name is None else arg_name
if expected_type:
msg = f"Argument Error: '{_arg_name}' is expecting type of '{expected_type}'. Got type of '{type(_arg).__name__}'"
else:
msg = f"Argument Error: '{_arg_name}' is not expecting '{type(_arg).__name__}'"
return msg
def _get_not_type_error_msg(self, arg: Optional[object] = None, arg_name: Optional[str] = None, not_type: Optional[str] = None) -> str:
_arg = self.field_value if arg is None else arg
_arg_name = self.key if arg_name is None else arg_name
if not_type:
msg = f"Argument Error: '{_arg_name}' is expecting non '{not_type}'. Got type of '{type(_arg).__name__}'"
else:
msg = f"Argument Error: '{_arg_name}' is expecting non '{type(_arg).__name__}'."
return msg
# region Properties
@property
def field_name(self) -> str:
'''
Name of the field assigned.
:getter: Gets the name of the field assigned
:setter: Sets the name of the field assigned
'''
return self._name
@field_name.setter
def field_name(self, value: str):
if not isinstance(value, str):
msg = self._get_type_error_msg(value, 'field_name', 'str')
raise TypeError(msg)
self._name = value
@property
def field_value(self) -> object:
"""
The value assigned to ``field_name``
:getter: Gets value assigned to ``field_name``
:setter: Sets value assigned to ``field_name``
"""
return self._value
@field_value.setter
def field_value(self, value: object):
self._value = value
@property
def key(self) -> str:
'''Gets the key currently being read'''
return self._key
@key.setter
def key(self, value: str):
if not isinstance(value, str):
msg = self._get_type_error_msg(value, 'key', 'str')
raise TypeError(msg)
self._key = value
@property
def raise_errors(self) -> bool:
"""
Determines if a rule can raise an error when validation fails.
:getter: Gets if a rule could raise an error when validation fails
:setter: Sets if a rule could raise an error when validation fails
"""
return self._raise_errors
@raise_errors.setter
def raise_errors(self, value: bool):
if not isinstance(value, bool):
msg = self._get_type_error_msg(value, 'raise_errors', 'bool')
raise TypeError(msg)
self._raise_errors = value
@property
def originator(self) -> object:
'''Gets the object whose attributes are validated'''
return self._originator
# endregion Properties
# endregion Interface
# region Attrib rules
class RuleAttrNotExist(IRule):
'''
Rule to ensure an attribute does not exist before it is added to class.
'''
def validate(self) -> bool:
"""
Validates that ``field_name`` is not an existing attribute of ``originator`` instance.
Raises:
AttributeError: If ``raise_errors`` is ``True`` and ``field_name`` is already an attribute of ``originator`` instance.
Returns:
bool: ``True`` if ``field_name`` is not an existing attribute of ``originator`` instance;
Otherwise, ``False``.
"""
result = not hasattr(self.originator, self.field_name)
if result == False and self.raise_errors == True:
raise AttributeError(
f"'{self.field_name}' attribute already exist in current instance of '{type(self.originator).__name__}'")
return result
class RuleAttrExist(IRule):
'''
Rule to ensure an attribute does exist before its value is set.
'''
def validate(self) -> bool:
"""
Validates that ``field_name`` is an existing attribute of ``originator`` instance.
Raises:
AttributeError: If ``raise_errors`` is ``True`` and ``field_name`` is not an attribute of ``originator`` instance.
Returns:
bool: ``True`` if ``field_name`` is an existing attribute of ``originator`` instance;
Otherwise, ``False``.
"""
result = hasattr(self.originator, self.field_name)
if result == False and self.raise_errors == True:
raise AttributeError(
f"'{self.field_name}' attribute does not exist in current instance of '{type(self.originator).__name__}'")
return result
# endregion Attrib rules
# region None
class RuleNone(IRule):
'''
Rule that matched only if value is ``None``.
'''
def validate(self) -> bool:
"""
Validates that value to assign to attribute is ``None``.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not ``None``.
Returns:
bool: ``True`` if ``field_value`` is ``None``; Otherwise, ``False``.
"""
if self.field_value is not None:
if self.raise_errors:
raise ValueError(
f"Arg error: {self.key} must not be assigned a value")
return False
return True
class RuleNotNone(IRule):
'''
Rule that matched only if value is not ``None``.
'''
def validate(self) -> bool:
"""
Validates that value to assign to attribute is not ``None``.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is ``None``.
Returns:
bool: ``True`` if ``field_value`` is not ``None``; Otherwise, ``False``.
"""
if self.field_value is None:
if self.raise_errors:
raise ValueError(
f"Arg error: {self.key} must be assigned a value")
return False
return True
# endregion None
# region Number
class RuleNumber(IRule):
'''
Rule that matched only if value is a valid number.
Note:
If value is of type ``bool`` then validation will fail for this rule.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a number
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not a number.
Returns:
bool: ``True`` if ``field_value`` is a number; Otherwise, ``False``.
"""
# isinstance(False, int) is True
# print(int(True)) 1
# print(int(False)) 0
if not isinstance(self.field_value, numbers.Number) or isinstance(self.field_value, bool):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(
self.field_value, self.key, 'Number'))
return False
return True
# region Integer
class RuleInt(IRule):
'''
Rule that matched only if value is instance of ``int``.
Note:
If value is of type ``bool`` then validation will fail for this rule.
'''
def validate(self) -> bool:
"""
Validates that value to assign is an int
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not an int.
Returns:
bool: ``True`` if ``field_value`` is an ``int``; Otherwise, ``False``.
"""
# isinstance(False, int) is True
# print(int(True)) 1
# print(int(False)) 0
if not isinstance(self.field_value, int) or isinstance(self.field_value, bool):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(expected_type='int'))
return False
return True
class RuleIntZero(RuleInt):
'''
Rule that matched only if value is equal to ``0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is equal to ``0`` int.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not equal to ``0`` int.
Returns:
bool: ``True`` if ``field_value`` equals ``0`` int; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value != 0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be equal to 0 int value")
return False
return True
class RuleIntPositive(RuleInt):
'''
Rule that matched only if value is equal or greater than ``0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a positive int
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not a positive int.
Returns:
bool: ``True`` if ``field_value`` is a positive int; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value < 0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a positive int value")
return False
return True
class RuleIntNegative(RuleInt):
'''
Rule that matched only if value is less than ``0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a negative int
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not a negative int.
Returns:
bool: ``True`` if ``field_value`` is a negative int; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value >= 0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a negative int value")
return False
return True
class RuleIntNegativeOrZero(RuleInt):
'''
Rule that matched only if value is equal or less than ``0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is equal to zero or a negative int
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not a negative int.
Returns:
bool: ``True`` if ``field_value`` is equal to zero or a negative int; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value > 0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be equal to zero or a negative int value")
return False
return True
class RuleByteUnsigned(RuleInt):
'''
Unsigned Byte rule, range from ``0`` to ``255``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is within the unsigned byte range (``0`` to ``255``).
Raises:
ValueError: If ``raise_errors`` is ``True`` and value is less than ``0`` or greater than ``255``.
Returns:
bool: ``True`` if Validation passes; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value < 0 or self.field_value > 255:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a num from 0 to 255")
return False
return True
class RuleByteSigned(RuleInt):
'''
Signed Byte rule, range from ``-128`` to ``127``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is within the signed byte range (``-128`` to ``127``).
Raises:
ValueError: If ``raise_errors`` is ``True`` and value is less than ``-128`` or greater than ``127``.
Returns:
bool: ``True`` if Validation passes; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value < -128 or self.field_value > 127:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a num from -128 to 127")
return False
return True
# endregion Integer
# region Float Rules
class RuleFloat(IRule):
'''
Rule that matched only if value is to type ``float``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a float
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not a float.
Returns:
bool: ``True`` if ``field_value`` is a float; Otherwise, ``False``.
"""
if not isinstance(self.field_value, float):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(
self.field_value, self.key, 'float'))
return False
return True
class RuleFloatZero(RuleFloat):
'''
Rule that matched only if value is equal to ``0.0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign equals ``0.0`` float
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not equal to ``0.0`` float.
Returns:
bool: ``True`` if ``field_value`` equals ``0.0`` float; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value != 0.0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be equal to 0.0 float value")
return False
return True
class RuleFloatPositive(RuleFloat):
'''
Rule that matched only if value is equal or greater than ``0.0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a positive float
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not a positive float.
Returns:
bool: ``True`` if ``field_value`` is a positive float; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value < 0.0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a positive float value")
return False
return True
class RuleFloatNegative(RuleFloat):
'''
Rule that matched only if value is less than ``0.0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a negative float
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not a negative float.
Returns:
bool: ``True`` if ``field_value`` is a negative float; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value >= 0.0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be a negative float value")
return False
return True
class RuleFloatNegativeOrZero(RuleFloat):
'''
Rule that matched only if value is equal or less than ``0.0``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is equal to ``0.0`` or a negative float
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value`` is not a negative float.
Returns:
bool: ``True`` if ``field_value`` is equal to ``0.0`` or a negative float; Otherwise, ``False``.
"""
if not super().validate():
return False
if self.field_value > 0.0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must be equal to 0.0 or a negative float value")
return False
return True
# endregion Float Rules
# endregion Number
# region String
class RuleStr(IRule):
'''
Rule that matched only if value is of type ``str``.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a string
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not instance of string.
Returns:
bool: ``True`` if ``field_value`` is a string; Otherwise, ``False``.
"""
if not isinstance(self.field_value, str):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(
self.field_value, self.key, 'str'))
return False
return True
class RuleStrEmpty(RuleStr):
'''
Rule that matched only if value is equal to empty string.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a string and is an empty string.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value``
is not an empty string.
Returns:
bool: ``True`` if value is an empty string; Otherwise, ``False``.
"""
if not super().validate():
return False
value = self.field_value
if len(value) != 0:
if self.raise_errors:
raise ValueError(
f"Arg error: {self.key} must be empty str")
return False
return True
class RuleStrNotNullOrEmpty(RuleStr):
'''
Rule that matched only if value is not ``None`` or empty string.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a string and is not an empty string.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value``
is not instance of string or is empty string
Returns:
bool: ``True`` if value is valid; Otherwise, ``False``.
"""
if not super().validate():
return False
value = self.field_value
if len(value) == 0:
if self.raise_errors:
raise ValueError(
f"Arg error: {self.key} must not be empty str")
return False
return True
class RuleStrNotNullEmptyWs(RuleStrNotNullOrEmpty):
'''
Rule that matched only if value is not ``None``, empty or whitespace.
'''
def validate(self) -> bool:
"""
Validates that value to assign is a string and is not an empty or whitespace string.
Raises:
ValueError: If ``raise_errors`` is ``True`` and ``field_value``
is not instance of string or is empty or whitespace string
Returns:
bool: ``True`` if value is valid; Otherwise, ``False``.
"""
if not super().validate():
return False
value = self.field_value.strip()
if len(value) == 0:
if self.raise_errors:
raise ValueError(
f"Arg error: '{self.key}' must not be empty or whitespace str")
return False
return True
# endregion String
# region boolean
class RuleBool(IRule):
"""
Rule that matched only if value is instance of bool.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a bool
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not instance of bool.
Returns:
bool: ``True`` if ``field_value`` is a bool; Otherwise, ``False``.
"""
if not isinstance(self.field_value, bool):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(expected_type='bool'))
return False
return True
# endregion boolean
# region Iterable
class RuleIterable(IRule):
"""
Rule that matched only if value is iterable such as list, tuple, set.
"""
def validate(self) -> bool:
"""
Validates that value to assign is iterable
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not iterable.
Returns:
bool: ``True`` if ``field_value`` is iterable; Otherwise, ``False``.
"""
if not is_iterable(self.field_value):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(expected_type="iterable"))
return False
return True
class RuleNotIterable(IRule):
"""
Rule that matched only if value is not iterable.
"""
def validate(self) -> bool:
"""
Validates that value to assign is not iterable
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is iterable.
Returns:
bool: ``True`` if ``field_value`` is not iterable; Otherwise, ``False``.
"""
if is_iterable(self.field_value):
if self.raise_errors:
raise TypeError(self._get_not_type_error_msg(not_type="iterable"))
return False
return True
# endregion Iterable
# region Path
class RulePath(IRule):
"""
Rule that matched only if value is instance of ``Path``.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a Path
Raises:
TypeError: If ``raise_errors`` is ``True`` and ``field_value`` is not instance of Path.
Returns:
bool: ``True`` if ``field_value`` is a ``Path``; Otherwise, ``False``.
"""
if not isinstance(self.field_value, Path):
if self.raise_errors:
raise TypeError(self._get_type_error_msg(expected_type='Path'))
return False
return True
class RulePathExist(RulePath):
"""
Rule that matched only if value is instance of ``Path`` and path exist.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a path that exists
Raises:
FileNotFoundError: If ``raise_errors`` is ``True`` and ``field_value`` is a path that does not exist.
Returns:
bool: ``True`` if ``field_value`` is an existing path; Otherwise, ``False``.
"""
if not super().validate():
return False
if not os.path.exists(self.field_value):
if self.raise_errors:
raise FileNotFoundError(
f"Unable to find path: '{self.field_value}'")
return False
return True
class RulePathNotExist(RulePath):
"""
Rule that matched only if value is instance of ``Path`` and path does not exist.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a Path that does not exist
Raises:
FileExistsError: If ``raise_errors`` is ``True`` and ``field_value`` is path that is existing.
Returns:
bool: ``True`` if ``field_value`` is a p ath that does not exist; Otherwise, ``False``.
"""
if not super().validate():
return False
if os.path.exists(self.field_value):
if self.raise_errors:
raise FileExistsError(
f"File already exist: '{self.field_value}'")
return False
return True
class RuleStrPathExist(RuleStr):
"""
Rule that matched only if value is instance of str and path exist.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a str path that exist
Raises:
FileNotFoundError: If ``raise_errors`` is ``True`` and ``field_value`` is path does not exist.
Returns:
bool: ``True`` if ``field_value`` is a path that does exist; Otherwise, ``False``.
"""
if not super().validate():
return False
if not os.path.exists(self.field_value):
if self.raise_errors:
raise FileNotFoundError(
f"Unable to find path: '{self.field_value}'")
return False
return True
class RuleStrPathNotExist(RuleStr):
"""
Rule that matched only if value is instance of str and path is not existing.
"""
def validate(self) -> bool:
"""
Validates that value to assign is a path not existing
Raises:
FileExistsError: If ``raise_errors`` is ``True`` and ``field_value`` is a path that is not existing.
Returns:
bool: ``True`` if ``field_value`` is a path that does not exist; Otherwise, ``False``.
"""
if not super().validate():
return False
if os.path.exists(self.field_value):
if self.raise_errors:
raise FileExistsError(
f"File already exist: '{self.field_value}'")
return False
return True
# end Region Path
| 30.278596 | 140 | 0.565305 | 3,224 | 26,736 | 4.587159 | 0.064826 | 0.060856 | 0.038136 | 0.038542 | 0.818987 | 0.78457 | 0.77037 | 0.738454 | 0.684901 | 0.627899 | 0 | 0.005012 | 0.328396 | 26,736 | 882 | 141 | 30.312925 | 0.818612 | 0.401855 | 0 | 0.613569 | 0 | 0.011799 | 0.118092 | 0.011817 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123894 | false | 0 | 0.017699 | 0 | 0.466077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
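The rule classes above all share one contract: `validate()` returns a bool, or raises when `raise_errors` is set. A minimal, self-contained sketch of that contract for the path-exists case — the `IRule` base, `is_iterable`, and the `_get_*_error_msg` helpers belong to the library and are stood in for here:

```python
import os
from pathlib import Path


class PathExistsRule:
    """Stand-in for RulePathExist: valid only if value is a Path that exists."""

    def __init__(self, field_value, raise_errors=False):
        self.field_value = field_value
        self.raise_errors = raise_errors

    def validate(self):
        # Type check first (RulePath), then the existence check (RulePathExist).
        if not isinstance(self.field_value, Path):
            if self.raise_errors:
                raise TypeError(f"Expected Path, got {type(self.field_value).__name__}")
            return False
        if not os.path.exists(self.field_value):
            if self.raise_errors:
                raise FileNotFoundError(f"Unable to find path: '{self.field_value}'")
            return False
        return True


# The current directory exists; a made-up path and a non-Path value do not pass.
assert PathExistsRule(Path(".")).validate() is True
assert PathExistsRule(Path("no/such/dir/hopefully")).validate() is False
assert PathExistsRule("not a Path").validate() is False
```

With `raise_errors=True` the same checks raise `TypeError` or `FileNotFoundError` instead of returning `False`, mirroring the library classes.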
fc17317887686c685f1e523699fe5f4bb72bdb2e | 100 | py | Python | homework/numtowords/greetings.py | maksym-bielyshev/dp_189_taqc | 67e0ecb68bef7dd710ef1a8248816efd7d834e59 | [
"MIT"
] | null | null | null | homework/numtowords/greetings.py | maksym-bielyshev/dp_189_taqc | 67e0ecb68bef7dd710ef1a8248816efd7d834e59 | [
"MIT"
] | null | null | null | homework/numtowords/greetings.py | maksym-bielyshev/dp_189_taqc | 67e0ecb68bef7dd710ef1a8248816efd7d834e59 | [
"MIT"
] | null | null | null | """Function with a greeting."""
def hello():
    """Print the greeting."""
    print("Hello!")
| 14.285714 | 32 | 0.58 | 11 | 100 | 5.272727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 100 | 6 | 33 | 16.666667 | 0.725 | 0.48 | 0 | 0 | 0 | 0 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
fc1dab790df79850b337cf2249f753dc77d6a885 | 7,280 | py | Python | tests/tests_task_manager.py | Bizarious/mra-discord | 9d98d7dd15be00ffa9f24958bbaea2980272e713 | [
"MIT"
] | null | null | null | tests/tests_task_manager.py | Bizarious/mra-discord | 9d98d7dd15be00ffa9f24958bbaea2980272e713 | [
"MIT"
] | 6 | 2021-12-09T16:41:15.000Z | 2022-01-25T23:44:14.000Z | tests/tests_task_manager.py | Bizarious/mra-discord | 9d98d7dd15be00ffa9f24958bbaea2980272e713 | [
"MIT"
] | null | null | null | from unittest import TestCase
from datetime import datetime as dt, timedelta as td
from core.task.task_control import TaskManager
from core.system import IPC
from core.database import Data
from core.containers import TransferPackage
class TaskManagerTests(TestCase):
tm: TaskManager
ipc: IPC
data: Data
t: TransferPackage
def setUp(self) -> None:
self.ipc = IPC()
self.ipc.create_queues("bot", "task")
self.data = Data()
self.tm = TaskManager(data=self.data, ipc=self.ipc)
self.tm.paths = {"../src/tasks": "tasks"}
self.tm.register_all_tasks()
self.t = TransferPackage()
self.t.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="1h",
label="test",
number=0
)
self.t.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(self.t)
def tearDown(self) -> None:
self.data.set_json(file="tasks", data=[])
def test_add_task_right_next_date(self):
self.tm.add_task(self.t)
right_next_time = dt.now().replace(microsecond=0) + td(hours=1)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_add_second_later_task_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="2h",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(t2)
right_next_time = dt.now().replace(microsecond=0) + td(hours=1)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_add_second_earlier_task_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="30m",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(t2)
right_next_time = dt.now().replace(microsecond=0) + td(minutes=30)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_remove_only_task_right_date(self):
task = self.tm.get_task(0, 0)
self.tm.delete_task(task)
self.assertEqual(None, self.tm.next_date)
def test_remove_first_task_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="2h",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(t2)
task = self.tm.get_task(0, 0)
self.tm.delete_task(task)
right_next_time = dt.now().replace(microsecond=0) + td(hours=2)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_remove_second_task_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="2h",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(t2)
task = self.tm.get_task(1, 0)
self.tm.delete_task(task)
right_next_time = dt.now().replace(microsecond=0) + td(hours=1)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_remove_all_tasks_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=0,
channel_id=0,
message="test",
message_args="",
date_string="2h",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=0,
channel_id=0
)
self.tm.add_task(t2)
self.tm.delete_all_tasks(0)
self.assertEqual(self.tm.next_date, None, msg="Wrong next date")
def test_add_second_task_different_user_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=1,
channel_id=0,
message="test",
message_args="",
date_string="30m",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=1,
channel_id=0
)
self.tm.add_task(t2)
right_next_time = dt.now().replace(microsecond=0) + td(minutes=30)
self.assertEqual(self.tm.next_date, right_next_time, msg="Wrong next date")
def test_load_tasks_right_date(self):
t2 = TransferPackage()
t2.pack(author_id=1,
channel_id=0,
message="test",
message_args="",
date_string="30m",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=1,
channel_id=0
)
self.tm.add_task(t2)
right_next_time = dt.now().replace(microsecond=0) + td(minutes=30)
self.tm.next_date = None
while not self.tm.task_queue.empty():
self.tm.task_queue.get()
self.tm.tasks = {}
self.tm.import_tasks(self.data.get_json(file="tasks"))
self.assertEqual(right_next_time, self.tm.next_date)
def test_delete_all_tasks_multiple_users(self):
t2 = TransferPackage()
t2.pack(author_id=1,
channel_id=0,
message="test",
message_args="",
date_string="30m",
label="test",
number=0
)
t2.label(dst="task",
cmd="task",
task="Reminder",
author_id=1,
channel_id=0
)
self.tm.add_task(t2)
self.tm.delete_all_tasks(0)
self.tm.delete_all_tasks(1)
self.assertEqual(None, self.tm.next_date)
| 29.236948 | 83 | 0.493544 | 819 | 7,280 | 4.17094 | 0.10989 | 0.064988 | 0.052693 | 0.056206 | 0.775176 | 0.763466 | 0.741218 | 0.721897 | 0.712529 | 0.712529 | 0 | 0.026394 | 0.396291 | 7,280 | 248 | 84 | 29.354839 | 0.750853 | 0 | 0 | 0.699029 | 0 | 0 | 0.051786 | 0 | 0 | 0 | 0 | 0 | 0.048544 | 1 | 0.058252 | false | 0 | 0.033981 | 0 | 0.116505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
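The tests above repeatedly compute an expected fire time as `dt.now().replace(microsecond=0)` plus an offset parsed from a `date_string` such as `"1h"` or `"30m"`, and assert that the scheduler's `next_date` equals the earliest of them. A deterministic sketch of that calculation — the exact `date_string` grammar the runner supports is an assumption here:

```python
from datetime import datetime as dt, timedelta as td


def expected_next_date(now, date_string):
    """Parse a tiny '1h'/'30m' style offset and return the expected fire time,
    truncated to whole seconds the way the tests do."""
    unit = date_string[-1]
    amount = int(date_string[:-1])
    delta = td(hours=amount) if unit == "h" else td(minutes=amount)
    return now.replace(microsecond=0) + delta


now = dt(2024, 1, 1, 12, 0, 0)
assert expected_next_date(now, "1h") == dt(2024, 1, 1, 13, 0, 0)

# The earliest of several offsets decides the scheduler's next date.
dates = [expected_next_date(now, s) for s in ("2h", "30m", "1h")]
assert min(dates) == dt(2024, 1, 1, 12, 30, 0)
```

Passing `now` explicitly keeps the example deterministic; the real tests tolerate no drift only because they truncate microseconds on both sides.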
fc2495005e183e82eac1729a0ed330d070b59e8c | 1,047 | py | Python | setup.py | doorknob6/WCLApi | f9edcb11b74dbbd7664c308f286a9c23dbd5d88b | [
"MIT"
] | null | null | null | setup.py | doorknob6/WCLApi | f9edcb11b74dbbd7664c308f286a9c23dbd5d88b | [
"MIT"
] | null | null | null | setup.py | doorknob6/WCLApi | f9edcb11b74dbbd7664c308f286a9c23dbd5d88b | [
"MIT"
] | null | null | null | from distutils.core import setup
setup(
    name="WCLApi",
    packages=["WCLApi"],
    version="0.4.0",
    license="MIT",
    description="Python tools to communicate with the Warcraftlogs website API.",
    author="doorknob6",
    author_email="joopkjongste@gmail.com",
    url="https://github.com/doorknob6/WCLApi",
    download_url="https://github.com/doorknob6/WCLApi/archive/master.tar.gz",
    keywords=["Nexushub", "API"],
    install_requires=["requests", "requests-toolbelt"],
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Intended Audience :: Developers",
        "Topic :: Software Development :: Build Tools",
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.4",
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
    ],
) | 37.392857 | 80 | 0.617956 | 110 | 1,047 | 5.854545 | 0.572727 | 0.206522 | 0.271739 | 0.282609 | 0.099379 | 0.099379 | 0 | 0 | 0 | 0 | 0 | 0.024631 | 0.224451 | 1,047 | 28 | 81 | 37.392857 | 0.768473 | 0 | 0 | 0 | 0 | 0 | 0.611641 | 0.020992 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.037037 | 0 | 0.037037 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fc73e9640c75aeb1925c131b52e05412b7b89f08 | 1,577 | py | Python | flockos/apis/channels.py | bilmyers/pyflock | b440ffbcd6a18c0d81b81dcdcbae7ae16c025d39 | [
"Apache-2.0"
] | 14 | 2017-02-14T07:02:59.000Z | 2022-03-30T13:59:59.000Z | flockos/apis/channels.py | bilmyers/pyflock | b440ffbcd6a18c0d81b81dcdcbae7ae16c025d39 | [
"Apache-2.0"
] | 10 | 2016-10-22T20:52:00.000Z | 2021-05-10T10:40:30.000Z | flockos/apis/channels.py | bilmyers/pyflock | b440ffbcd6a18c0d81b81dcdcbae7ae16c025d39 | [
"Apache-2.0"
] | 8 | 2017-03-03T13:16:34.000Z | 2020-07-23T17:59:54.000Z | # coding: utf-8
# python 2 and python 3 compatibility library
from six import iteritems
from ..api_client import call_api
def get_info(token, channel_id, **kwargs):
    """
    This method makes a synchronous HTTP request.

    :param str token: (required)
    :param str channel_id: (required)
    :return: response dict
    """
    params = locals()
    for key, val in iteritems(params['kwargs']):
        params[key] = val
    del params['kwargs']
    resource_path = '/channels.getInfo'.replace('{format}', 'json')
    response = call_api(resource_path, params=params)
    return response


def get_members(token, channel_id, show_public_profile, **kwargs):
    """
    This method makes a synchronous HTTP request.

    :param str token: (required)
    :param str channel_id: (required)
    :param bool show_public_profile: (required)
    :return: response dict
    """
    params = locals()
    for key, val in iteritems(params['kwargs']):
        params[key] = val
    del params['kwargs']
    resource_path = '/channels.getMembers'.replace('{format}', 'json')
    response = call_api(resource_path, params=params)
    return response


# Note: shadows the built-in ``list``; the name mirrors the channels.list endpoint.
def list(token, **kwargs):
    """
    This method makes a synchronous HTTP request.

    :param str token: (required)
    :return: response dict
    """
    params = locals()
    for key, val in iteritems(params['kwargs']):
        params[key] = val
    del params['kwargs']
    resource_path = '/channels.list'.replace('{format}', 'json')
    response = call_api(resource_path, params=params)
    return response
| 24.261538 | 70 | 0.655041 | 192 | 1,577 | 5.270833 | 0.296875 | 0.083004 | 0.047431 | 0.062253 | 0.786561 | 0.786561 | 0.786561 | 0.786561 | 0.786561 | 0.786561 | 0 | 0.002449 | 0.223209 | 1,577 | 64 | 71 | 24.640625 | 0.823673 | 0.298034 | 0 | 0.692308 | 0 | 0 | 0.120825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
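All three endpoint wrappers share one parameter-merging idiom: snapshot the named arguments with `locals()`, fold in `**kwargs`, and drop the `kwargs` key before sending. Isolated as a self-contained sketch (Python 3's `dict.items` stands in for `iteritems`):

```python
def merge_params(token, channel_id, **kwargs):
    """Collect named arguments plus any extras into one flat dict,
    mirroring the locals()/iteritems pattern used by the wrappers."""
    # locals() is called before any other assignment, so the snapshot holds
    # exactly: token, channel_id, and the kwargs dict.
    params = locals()
    for key, val in params["kwargs"].items():
        params[key] = val
    del params["kwargs"]
    return params


p = merge_params("tok", "c1", page=2)
assert p == {"token": "tok", "channel_id": "c1", "page": 2}
```

Mutating the dict returned by `locals()` is safe inside a function in CPython because it is a snapshot, not a live view of the local scope.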
fc9e3e24c05b5ba684fdcc29a564d41d693692a1 | 89 | py | Python | django-celery/home/admin.py | mrseyfi/django-celery | 2da50b020d903eabb877aed6ffba00c118c62eaf | [
"MIT"
] | null | null | null | django-celery/home/admin.py | mrseyfi/django-celery | 2da50b020d903eabb877aed6ffba00c118c62eaf | [
"MIT"
] | null | null | null | django-celery/home/admin.py | mrseyfi/django-celery | 2da50b020d903eabb877aed6ffba00c118c62eaf | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Number
admin.site.register(Number) | 17.8 | 32 | 0.820225 | 13 | 89 | 5.615385 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11236 | 89 | 5 | 33 | 17.8 | 0.924051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5da38113f75e1dd83be5794caed72f37218b969a | 134 | py | Python | test/__init__.py | lionel-/BeetsPluginStructuredComments | cdd0ae28d0b5ff3db6ac227a65b522be673cadcf | [
"MIT"
] | 3 | 2020-12-23T10:16:51.000Z | 2021-12-23T23:44:00.000Z | test/__init__.py | lionel-/BeetsPluginStructuredComments | cdd0ae28d0b5ff3db6ac227a65b522be673cadcf | [
"MIT"
] | null | null | null | test/__init__.py | lionel-/BeetsPluginStructuredComments | cdd0ae28d0b5ff3db6ac227a65b522be673cadcf | [
"MIT"
] | 1 | 2020-12-30T14:07:18.000Z | 2020-12-30T14:07:18.000Z | # Copyright: Copyright (c) 2020., Michael Toohig
# Author: Michael Toohig <michael dot toohig at gmail>
# License: See LICENSE.txt
| 33.5 | 55 | 0.731343 | 18 | 134 | 5.444444 | 0.666667 | 0.265306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036036 | 0.171642 | 134 | 3 | 56 | 44.666667 | 0.846847 | 0.940299 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
5dcdf39051ce8b4b546a96f2dc55aa51ab5d327d | 114 | py | Python | Hello World.py | Futurist-Forever/playground-for-python | a6f569c5f689dc83ec087b6e00a582123af9f732 | [
"MIT"
] | null | null | null | Hello World.py | Futurist-Forever/playground-for-python | a6f569c5f689dc83ec087b6e00a582123af9f732 | [
"MIT"
] | null | null | null | Hello World.py | Futurist-Forever/playground-for-python | a6f569c5f689dc83ec087b6e00a582123af9f732 | [
"MIT"
] | null | null | null | # playground-for-python
# Program 1: Hello World
print("Hello World")  # prints "Hello World" as output
| 22.8 | 60 | 0.684211 | 16 | 114 | 4.875 | 0.75 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011111 | 0.210526 | 114 | 4 | 61 | 28.5 | 0.855556 | 0.684211 | 0 | 0 | 0 | 0 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
5ddd0825c6ad1a50b91af7658478c063e2051e6e | 205 | py | Python | manager/models/__init__.py | Exanis/dataset-manager | af2f2d4242417eb14240129ac6312a0ebdfd24ee | [
"MIT"
] | null | null | null | manager/models/__init__.py | Exanis/dataset-manager | af2f2d4242417eb14240129ac6312a0ebdfd24ee | [
"MIT"
] | 5 | 2018-11-22T13:32:17.000Z | 2018-11-22T13:34:39.000Z | manager/models/__init__.py | Exanis/dataset-manager | af2f2d4242417eb14240129ac6312a0ebdfd24ee | [
"MIT"
] | null | null | null | from .datatype import DataType, DataTypeElement, DataTypeOption
from .collection import Collection, CollectionElement, CollectionElementValue
from .export import Export, ExportParam
from .task import Task
| 41 | 77 | 0.853659 | 21 | 205 | 8.333333 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102439 | 205 | 4 | 78 | 51.25 | 0.951087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5dee49d22e2c8b88db1bbb18ac402f1635d8390e | 320 | py | Python | tests/test_parse_params_helpers.py | Dakhnovskiy/pomogator_bot | 9a8b9d5f79b800020d99ffd6034df054d405e434 | [
"Apache-2.0"
] | null | null | null | tests/test_parse_params_helpers.py | Dakhnovskiy/pomogator_bot | 9a8b9d5f79b800020d99ffd6034df054d405e434 | [
"Apache-2.0"
] | null | null | null | tests/test_parse_params_helpers.py | Dakhnovskiy/pomogator_bot | 9a8b9d5f79b800020d99ffd6034df054d405e434 | [
"Apache-2.0"
] | null | null | null | from fixtures import params_weather_forecast
from bot.handlers.parse_params_helpers import parse_params_weather_forecast
def test_parse_params_weather_forecast(params_weather_forecast):
    result = parse_params_weather_forecast(params_weather_forecast['param'])
    assert result == params_weather_forecast['result']
| 40 | 76 | 0.859375 | 41 | 320 | 6.219512 | 0.365854 | 0.356863 | 0.576471 | 0.305882 | 0.368627 | 0.368627 | 0.368627 | 0 | 0 | 0 | 0 | 0 | 0.084375 | 320 | 7 | 77 | 45.714286 | 0.870307 | 0 | 0 | 0 | 0 | 0 | 0.034375 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
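`params_weather_forecast` is a project fixture supplying `param`/`result` pairs; the table-driven shape it enables looks like this — the parser and data below are hypothetical stand-ins, not the bot's real format:

```python
def parse_params_weather_forecast(param):
    """Hypothetical stand-in for the real parser: split a 'city,days' string."""
    city, days = param.split(",")
    return {"city": city, "days": int(days)}


# Each case pairs a raw input with the structure the parser should produce,
# matching the fixture's 'param'/'result' keys.
cases = [
    {"param": "Kyiv,3", "result": {"city": "Kyiv", "days": 3}},
    {"param": "Oslo,1", "result": {"city": "Oslo", "days": 1}},
]
for case in cases:
    assert parse_params_weather_forecast(case["param"]) == case["result"]
```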
f8d65b960c578870b80d01a9a5d044c84b174c02 | 81 | py | Python | Programmers/Lv.1/lack_money.py | kangjunseo/C- | eafdf57a22b3a794d09cab045d6d60c2842ba347 | [
"MIT"
] | 2 | 2021-08-30T12:37:57.000Z | 2021-11-29T05:42:05.000Z | Programmers/Lv.1/lack_money.py | kangjunseo/C- | eafdf57a22b3a794d09cab045d6d60c2842ba347 | [
"MIT"
] | null | null | null | Programmers/Lv.1/lack_money.py | kangjunseo/C- | eafdf57a22b3a794d09cab045d6d60c2842ba347 | [
"MIT"
] | null | null | null | def solution(price, money, count): return max(0,price*count*(count+1)/2 - money)
| 40.5 | 80 | 0.716049 | 14 | 81 | 4.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041096 | 0.098765 | 81 | 1 | 81 | 81 | 0.753425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
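The solution sums an arithmetic series: ride `k` costs `price * k`, so `count` rides cost `price * count * (count + 1) / 2`, and the answer is whatever exceeds the money on hand. A worked example:

```python
def lack(price, money, count):
    # Total for rides 1..count: price * (1 + 2 + ... + count).
    total = price * count * (count + 1) // 2
    return max(0, total - money)


# price 3, rides 1..4 cost 3 + 6 + 9 + 12 = 30; with 20 on hand, 10 is missing.
assert lack(3, 20, 4) == 10
# Enough money -> nothing lacking.
assert lack(3, 20, 1) == 0
```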
f8f6ae6d2b78ee832987b97f16bfc3b842c327cc | 6,597 | py | Python | tests/networking/arista/test_save_config.py | QualiSystems/cloudshell-networking-arista- | 011ff605244a98bb488fec985bd0e053af9855d0 | [
"Apache-2.0"
] | null | null | null | tests/networking/arista/test_save_config.py | QualiSystems/cloudshell-networking-arista- | 011ff605244a98bb488fec985bd0e053af9855d0 | [
"Apache-2.0"
] | 9 | 2018-04-03T12:02:29.000Z | 2021-07-08T09:07:29.000Z | tests/networking/arista/test_save_config.py | QualiSystems/cloudshell-networking-arista- | 011ff605244a98bb488fec985bd0e053af9855d0 | [
"Apache-2.0"
] | 2 | 2017-02-08T23:52:21.000Z | 2018-07-04T15:33:36.000Z | from mock import MagicMock, patch
from cloudshell.networking.arista.runners.arista_configuration_runner import (
AristaConfigurationRunner,
)
from tests.networking.arista.base_test import (
ENABLE_PROMPT,
VRF_PROMPT,
BaseAristaTestCase,
CliEmulator,
Command,
)
@patch("cloudshell.cli.session.ssh_session.paramiko", MagicMock())
@patch(
"cloudshell.cli.session.ssh_session.SSHSession._clear_buffer",
MagicMock(return_value=""),
)
class TestSaveConfig(BaseAristaTestCase):
def _setUp(self, attrs=None):
super(TestSaveConfig, self)._setUp(attrs)
self.runner = AristaConfigurationRunner(
self.logger, self.resource_config, self.api, self.cli_handler
)
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_save_anonymous(self, send_mock, recv_mock):
self._setUp()
host = "192.168.122.10"
ftp_path = "ftp://{}".format(host)
configuration_type = "running"
emu = CliEmulator(
[
Command(
r"^copy {0} {1}/Arista-{0}-\d+-\d+$".format(
configuration_type, ftp_path
),
"Copy complete\n" "{}".format(ENABLE_PROMPT),
regexp=True,
),
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
self.runner.save(ftp_path, configuration_type)
emu.check_calls()
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_save_ftp(self, send_mock, recv_mock):
self._setUp()
user = "user"
password = "password"
host = "192.168.122.10"
ftp_path = "ftp://{}:{}@{}".format(user, password, host)
configuration_type = "running"
emu = CliEmulator(
[
Command(
r"^copy {0} {1}/Arista-{0}-\d+-\d+$".format(
configuration_type, ftp_path
),
"Copy complete\n" "{}".format(ENABLE_PROMPT),
regexp=True,
)
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
self.runner.save(ftp_path, configuration_type)
emu.check_calls()
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_save_with_vrf(self, send_mock, recv_mock):
vrf_name = "vrf_name"
self._setUp({"VRF Management Name": vrf_name})
user = "user"
password = "password"
host = "192.168.122.10"
ftp_path = "ftp://{}:{}@{}".format(user, password, host)
configuration_type = "running"
emu = CliEmulator(
[
Command(
"routing-context vrf {}".format(vrf_name),
VRF_PROMPT.format(vrf_name=vrf_name),
),
Command(
r"^copy {0} {1}/Arista-{0}-\d+-\d+$".format(
configuration_type, ftp_path
),
"Copy complete\n" "{}".format(VRF_PROMPT.format(vrf_name=vrf_name)),
regexp=True,
),
Command("routing-context vrf default", ENABLE_PROMPT),
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
self.runner.save(ftp_path, configuration_type)
emu.check_calls()
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_save_startup(self, send_mock, recv_mock):
self._setUp()
user = "user"
password = "password"
host = "192.168.122.10"
ftp_path = "ftp://{}:{}@{}".format(user, password, host)
configuration_type = "startup"
emu = CliEmulator(
[
Command(
r"^copy {0} {1}/Arista-{0}-\d+-\d+$".format(
configuration_type, ftp_path
),
"Copy complete\n" "{}".format(ENABLE_PROMPT),
regexp=True,
)
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
self.runner.save(ftp_path, configuration_type)
emu.check_calls()
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_fail_to_save(self, send_mock, recv_mock):
self._setUp()
host = "192.168.122.10"
ftp_path = "ftp://{}".format(host)
configuration_type = "running"
emu = CliEmulator(
[
Command(
r"^copy {0} {1}/Arista-{0}-\d+-\d+$".format(
configuration_type, ftp_path
),
"Error\n" "{}".format(ENABLE_PROMPT),
regexp=True,
)
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
        self.assertRaisesRegex(
            Exception,
            "Copy Command failed",
            self.runner.save,
            ftp_path,
            configuration_type,
        )
emu.check_calls()
@patch("cloudshell.cli.session.ssh_session.SSHSession._receive_all")
@patch("cloudshell.cli.session.ssh_session.SSHSession.send_line")
def test_save_to_device(self, send_mock, recv_mock):
self._setUp(
{
"Backup Location": "",
"Backup Type": AristaConfigurationRunner.DEFAULT_FILE_SYSTEM,
}
)
path = ""
configuration_type = "running"
emu = CliEmulator(
[
Command(
r"copy {0} flash:/Arista-{0}-\d+-\d+".format(configuration_type),
"Copy complete\n" "{}".format(ENABLE_PROMPT),
regexp=True,
)
]
)
send_mock.side_effect = emu.send_line
recv_mock.side_effect = emu.receive_all
self.runner.save(path, configuration_type)
emu.check_calls()
| 32.497537 | 88 | 0.549947 | 672 | 6,597 | 5.145833 | 0.141369 | 0.08849 | 0.072874 | 0.101215 | 0.784558 | 0.778774 | 0.75882 | 0.711394 | 0.711394 | 0.696645 | 0 | 0.016319 | 0.331211 | 6,597 | 202 | 89 | 32.658416 | 0.767452 | 0 | 0 | 0.574713 | 0 | 0 | 0.212218 | 0.139457 | 0 | 0 | 0 | 0 | 0.005747 | 1 | 0.04023 | false | 0.034483 | 0.017241 | 0 | 0.063218 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
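`CliEmulator` and `Command` are test helpers from `base_test`; a minimal sketch of the pattern they implement — a script of regex-matched commands with canned responses, verified afterwards with `check_calls()`:

```python
import re


class TinyCliEmulator:
    """Minimal sketch of the command-scripting idea behind CliEmulator."""

    def __init__(self, commands):
        self.commands = list(commands)  # [(pattern, response), ...]
        self.sent = []

    def send_line(self, line):
        # Stands in for SSHSession.send_line via mock side_effect.
        self.sent.append(line)

    def receive_all(self, *args, **kwargs):
        # Stands in for SSHSession._receive_all: match the last-sent command
        # against the next scripted pattern and return its canned response.
        line = self.sent.pop(0)
        pattern, response = self.commands.pop(0)
        assert re.match(pattern, line), f"unexpected command: {line!r}"
        return response

    def check_calls(self):
        assert not self.commands, "not all expected commands were sent"


emu = TinyCliEmulator([(r"^copy running ftp://\S+$", "Copy complete\n#")])
emu.send_line("copy running ftp://192.168.122.10/backup")
assert emu.receive_all().startswith("Copy complete")
emu.check_calls()
```

In the real tests, `send_line`/`receive_all` are wired to the patched `SSHSession` methods through `side_effect`, so the runner under test drives the script without a live device.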
5d3122194d423b3cf500448fe1ac778162db1ff1 | 266 | py | Python | pynocle/depgraph/__init__.py | Totti20/pynocle | 05f781a932bfb4f78c02f3a8f3c5cf6cf6186356 | [
"MIT"
] | null | null | null | pynocle/depgraph/__init__.py | Totti20/pynocle | 05f781a932bfb4f78c02f3a8f3c5cf6cf6186356 | [
"MIT"
] | null | null | null | pynocle/depgraph/__init__.py | Totti20/pynocle | 05f781a932bfb4f78c02f3a8f3c5cf6cf6186356 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from _doc import about_coupling, about_rank
from depbuilder import DepBuilder, DependencyGroup
from formatting import RankGoogleChartFormatter, CouplingGoogleChartFormatter
from rendering import IRenderer, DefaultRenderer, DefaultStyler
| 38 | 78 | 0.849624 | 27 | 266 | 8.259259 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112782 | 266 | 6 | 79 | 44.333333 | 0.944915 | 0.075188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
53cfec3d88179e004a687c0a56aec1569941e444 | 6,862 | py | Python | test/swig/Ceil.py | AyishaR/deepC | 1dc9707ef5ca9000fc13c3da7f1129685a83b494 | [
"Apache-2.0"
] | null | null | null | test/swig/Ceil.py | AyishaR/deepC | 1dc9707ef5ca9000fc13c3da7f1129685a83b494 | [
"Apache-2.0"
] | null | null | null | test/swig/Ceil.py | AyishaR/deepC | 1dc9707ef5ca9000fc13c3da7f1129685a83b494 | [
"Apache-2.0"
] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License") you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, unused-argument
#
# This file is part of DNN compiler maintained at
# https://github.com/ai-techsystems/dnnCompiler
import common
import deepC.dnnc as dc
import numpy as np
import unittest
class CeilTest(unittest.TestCase):
def setUp(self):
self.len = 48
self.np_float_a = np.random.randn(self.len).astype(np.float32)
self.dc_float_a = dc.array(list(self.np_float_a))
self.np_double_a = np.random.randn(self.len).astype(np.float64)
self.dc_double_a = dc.array(list(self.np_double_a))
def test_Ceil1D_float (self):
npr = np.ceil(self.np_float_a)
dcr = dc.ceil(self.dc_float_a)
np.testing.assert_allclose(npr, np.array(dcr.data()).astype(np.float32),
rtol=1e-3, atol=1e-3)
def test_Ceil1D_double (self):
npr = np.ceil(self.np_double_a)
dcr = dc.ceil(self.dc_double_a)
np.testing.assert_allclose(npr, np.array(dcr.data()).astype(np.float64),
rtol=1e-3, atol=1e-3)
def test_Ceil2D_float_1 (self):
np_float_a = np.reshape(self.np_float_a, (3,16))
dc_float_a = dc.reshape(self.dc_float_a, (3,16))
npr = np.ceil(np_float_a)
dcr = dc.ceil(dc_float_a)
np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
rtol=1e-3, atol=1e-3)
def test_Ceil2D_float_2 (self):
np_float_a = np.reshape(self.np_float_a, (6,8))
dc_float_a = dc.reshape(self.dc_float_a, (6,8))
npr = np.ceil(np_float_a)
dcr = dc.ceil(dc_float_a)
np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
rtol=1e-3, atol=1e-3)
def test_Ceil2D_float_3 (self):
np_float_a = np.reshape(self.np_float_a, (12,4))
dc_float_a = dc.reshape(self.dc_float_a, (12,4))
npr = np.ceil(np_float_a)
dcr = dc.ceil(dc_float_a)
np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
rtol=1e-3, atol=1e-3)
def test_Ceil2D_double_1 (self):
        np_double_a = np.reshape(self.np_double_a, (3, 16))
        dc_double_a = dc.reshape(self.dc_double_a, (3, 16))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil2D_double_2(self):
        np_double_a = np.reshape(self.np_double_a, (6, 8))
        dc_double_a = dc.reshape(self.dc_double_a, (6, 8))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil2D_double_3(self):
        np_double_a = np.reshape(self.np_double_a, (12, 4))
        dc_double_a = dc.reshape(self.dc_double_a, (12, 4))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_float_1(self):
        np_float_a = np.reshape(self.np_float_a, (4, 4, 3))
        dc_float_a = dc.reshape(self.dc_float_a, (4, 4, 3))
        npr = np.ceil(np_float_a)
        dcr = dc.ceil(dc_float_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_float_2(self):
        np_float_a = np.reshape(self.np_float_a, (8, 2, 3))
        dc_float_a = dc.reshape(self.dc_float_a, (8, 2, 3))
        npr = np.ceil(np_float_a)
        dcr = dc.ceil(dc_float_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_float_3(self):
        np_float_a = np.reshape(self.np_float_a, (2, 4, 6))
        dc_float_a = dc.reshape(self.dc_float_a, (2, 4, 6))
        npr = np.ceil(np_float_a)
        dcr = dc.ceil(dc_float_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_double_1(self):
        np_double_a = np.reshape(self.np_double_a, (4, 4, 3))
        dc_double_a = dc.reshape(self.dc_double_a, (4, 4, 3))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_double_2(self):
        np_double_a = np.reshape(self.np_double_a, (8, 2, 3))
        dc_double_a = dc.reshape(self.dc_double_a, (8, 2, 3))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil3D_double_3(self):
        np_double_a = np.reshape(self.np_double_a, (2, 4, 6))
        dc_double_a = dc.reshape(self.dc_double_a, (2, 4, 6))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil4D_float(self):
        np_float_a = np.reshape(self.np_float_a, (4, 2, 2, 3))
        dc_float_a = dc.reshape(self.dc_float_a, (4, 2, 2, 3))
        npr = np.ceil(np_float_a)
        dcr = dc.ceil(dc_float_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float32),
                                   rtol=1e-3, atol=1e-3)

    def test_Ceil4D_double(self):
        np_double_a = np.reshape(self.np_double_a, (4, 2, 2, 3))
        dc_double_a = dc.reshape(self.dc_double_a, (4, 2, 2, 3))
        npr = np.ceil(np_double_a)
        dcr = dc.ceil(dc_double_a)
        np.testing.assert_allclose(npr.flatten(), np.array(dcr.data()).astype(np.float64),
                                   rtol=1e-3, atol=1e-3)

    def tearDown(self):
        return "test finished"


if __name__ == '__main__':
    unittest.main()
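Every test above follows the same pattern: reshape both arrays, apply `ceil` in each backend, flatten, and compare elementwise. A minimal stdlib sketch of that shape-independent check, with `math.ceil` standing in for both `np.ceil` and `dc.ceil` (the helper names `ceil_nested` and `flatten` are hypothetical, not part of the file above):

```python
import math


def ceil_nested(a):
    """Apply ceil elementwise to a (possibly nested) list, preserving shape."""
    if isinstance(a, list):
        return [ceil_nested(x) for x in a]
    return math.ceil(a)


def flatten(a):
    """Flatten a nested list into a flat list of scalars."""
    if isinstance(a, list):
        return [x for sub in a for x in flatten(sub)]
    return [a]


# The same comparison the tests make: ceil, flatten, compare elementwise.
data = [[0.1, 1.5, -0.5], [2.0, -1.1, 3.9]]
result = flatten(ceil_nested(data))
expected = [math.ceil(x) for x in flatten(data)]
assert result == expected  # the reshape never changes the elementwise result
```

Because `ceil` acts elementwise, the particular 2D/3D/4D shape never affects the flattened output, which is why all the tests above can share one assertion style.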
from ex112.UtilidadesdeDev import moeda
from ex112.UtilidadesdeDev import dado
p = dado.leiaValor('Digite um preço: R$')  # prompt: "Enter a price: R$"
moeda.resumo(p, 5, 2)
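The `moeda` and `dado` helpers come from the companion `ex112` package, which is not included in this file. As a rough sketch of what `moeda.resumo(p, 5, 2)` plausibly does (every name below — `moeda_fmt`, `aumentar`, `diminuir`, `resumo` — is an assumption, not the real ex112 API):

```python
def moeda_fmt(value, symbol='R$'):
    """Format a number as a currency string, e.g. 1234.5 -> 'R$1234.50'."""
    return f'{symbol}{value:.2f}'


def aumentar(value, pct):
    """Return value increased by pct percent."""
    return value + value * pct / 100


def diminuir(value, pct):
    """Return value decreased by pct percent."""
    return value - value * pct / 100


def resumo(value, pct_up, pct_down):
    """Print a price summary: double, half, and percentage changes."""
    print(f'Analyzed price: {moeda_fmt(value)}')
    print(f'Double:         {moeda_fmt(value * 2)}')
    print(f'Half:           {moeda_fmt(value / 2)}')
    print(f'{pct_up}% increase:    {moeda_fmt(aumentar(value, pct_up))}')
    print(f'{pct_down}% decrease:    {moeda_fmt(diminuir(value, pct_down))}')
```

So `moeda.resumo(p, 5, 2)` would report the price `p` doubled, halved, raised by 5%, and lowered by 2%.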
from plotly.basedatatypes import BaseTraceType as _BaseTraceType
import copy as _copy
class Image(_BaseTraceType):
# class properties
# --------------------
_parent_path_str = ""
_path_str = "image"
_valid_props = {
"colormodel",
"customdata",
"customdatasrc",
"dx",
"dy",
"hoverinfo",
"hoverinfosrc",
"hoverlabel",
"hovertemplate",
"hovertemplatesrc",
"hovertext",
"hovertextsrc",
"ids",
"idssrc",
"meta",
"metasrc",
"name",
"opacity",
"source",
"stream",
"text",
"textsrc",
"type",
"uid",
"uirevision",
"visible",
"x0",
"xaxis",
"y0",
"yaxis",
"z",
"zmax",
"zmin",
"zsmooth",
"zsrc",
}
# colormodel
# ----------
@property
def colormodel(self):
"""
Color model used to map the numerical color components
described in `z` into colors. If `source` is specified, this
attribute will be set to `rgba256` otherwise it defaults to
`rgb`.
The 'colormodel' property is an enumeration that may be specified as:
- One of the following enumeration values:
['rgb', 'rgba', 'rgba256', 'hsl', 'hsla']
Returns
-------
Any
"""
return self["colormodel"]
@colormodel.setter
def colormodel(self, val):
self["colormodel"] = val
# customdata
# ----------
@property
def customdata(self):
"""
Assigns extra data to each datum. This may be useful when
listening to hover, click and selection events. Note that
"scatter" traces also append customdata items in the marker
DOM elements.
The 'customdata' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["customdata"]
@customdata.setter
def customdata(self, val):
self["customdata"] = val
# customdatasrc
# -------------
@property
def customdatasrc(self):
"""
Sets the source reference on Chart Studio Cloud for customdata.
The 'customdatasrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["customdatasrc"]
@customdatasrc.setter
def customdatasrc(self, val):
self["customdatasrc"] = val
# dx
# --
@property
def dx(self):
"""
Set the pixel's horizontal size.
The 'dx' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["dx"]
@dx.setter
def dx(self, val):
self["dx"] = val
# dy
# --
@property
def dy(self):
"""
Set the pixel's vertical size.
The 'dy' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["dy"]
@dy.setter
def dy(self, val):
self["dy"] = val
# hoverinfo
# ---------
@property
def hoverinfo(self):
"""
Determines which trace information appears on hover. If `none`
or `skip` are set, no information is displayed upon hovering.
But, if `none` is set, click and hover events are still fired.
The 'hoverinfo' property is a flaglist and may be specified
as a string containing:
- Any combination of ['x', 'y', 'z', 'color', 'name', 'text'] joined with '+' characters
(e.g. 'x+y')
OR exactly one of ['all', 'none', 'skip'] (e.g. 'skip')
- A list or array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["hoverinfo"]
@hoverinfo.setter
def hoverinfo(self, val):
self["hoverinfo"] = val
# hoverinfosrc
# ------------
@property
def hoverinfosrc(self):
"""
Sets the source reference on Chart Studio Cloud for hoverinfo.
The 'hoverinfosrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["hoverinfosrc"]
@hoverinfosrc.setter
def hoverinfosrc(self, val):
self["hoverinfosrc"] = val
# hoverlabel
# ----------
@property
def hoverlabel(self):
"""
The 'hoverlabel' property is an instance of Hoverlabel
that may be specified as:
- An instance of :class:`plotly.graph_objs.image.Hoverlabel`
- A dict of string/value properties that will be passed
to the Hoverlabel constructor
Supported dict properties:
align
Sets the horizontal alignment of the text
content within hover label box. Has an effect
only if the hover label text spans two or
more lines
alignsrc
Sets the source reference on Chart Studio Cloud
for align .
bgcolor
Sets the background color of the hover labels
for this trace
bgcolorsrc
Sets the source reference on Chart Studio Cloud
for bgcolor .
bordercolor
Sets the border color of the hover labels for
this trace.
bordercolorsrc
Sets the source reference on Chart Studio Cloud
for bordercolor .
font
Sets the font used in hover labels.
namelength
Sets the default length (in number of
characters) of the trace name in the hover
labels for all traces. -1 shows the whole name
regardless of length. 0-3 shows the first 0-3
characters, and an integer >3 will show the
whole name if it is less than that many
characters, but if it is longer, will truncate
to `namelength - 3` characters and add an
ellipsis.
namelengthsrc
Sets the source reference on Chart Studio Cloud
for namelength .
Returns
-------
plotly.graph_objs.image.Hoverlabel
"""
return self["hoverlabel"]
@hoverlabel.setter
def hoverlabel(self, val):
self["hoverlabel"] = val
# hovertemplate
# -------------
@property
def hovertemplate(self):
"""
Template string used for rendering the information that appear
on hover box. Note that this will override `hoverinfo`.
Variables are inserted using %{variable}, for example "y: %{y}"
as well as %{xother}, {%_xother}, {%_xother_}, {%xother_}. When
showing info for several points, "xother" will be added to
those with different x positions from the first point. An
underscore before or after "(x|y)other" will add a space on
that side, only when this field is shown. Numbers are formatted
using d3-format's syntax %{variable:d3-format}, for example
"Price: %{y:$.2f}". https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for details on
the formatting syntax. Dates are formatted using d3-time-
format's syntax %{variable|d3-time-format}, for example "Day:
%{2019-01-01|%A}". https://github.com/d3/d3-time-
format#locale_format for details on the date formatting syntax.
The variables available in `hovertemplate` are the ones emitted
as event data described at this link
https://plotly.com/javascript/plotlyjs-events/#event-data.
Additionally, every attribute that can be specified per-point
(the ones that are `arrayOk: true`) is available, as are the
variables `z`, `color` and `colormodel`. Anything contained in tag
`<extra>` is displayed in the secondary box, for example
"<extra>{fullData.name}</extra>". To hide the secondary box
completely, use an empty tag `<extra></extra>`.
The 'hovertemplate' property is a string and must be specified as:
- A string
- A number that will be converted to a string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["hovertemplate"]
@hovertemplate.setter
def hovertemplate(self, val):
self["hovertemplate"] = val
# hovertemplatesrc
# ----------------
@property
def hovertemplatesrc(self):
"""
Sets the source reference on Chart Studio Cloud for hovertemplate.
The 'hovertemplatesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["hovertemplatesrc"]
@hovertemplatesrc.setter
def hovertemplatesrc(self, val):
self["hovertemplatesrc"] = val
# hovertext
# ---------
@property
def hovertext(self):
"""
Same as `text`.
The 'hovertext' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["hovertext"]
@hovertext.setter
def hovertext(self, val):
self["hovertext"] = val
# hovertextsrc
# ------------
@property
def hovertextsrc(self):
"""
Sets the source reference on Chart Studio Cloud for hovertext.
The 'hovertextsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["hovertextsrc"]
@hovertextsrc.setter
def hovertextsrc(self, val):
self["hovertextsrc"] = val
# ids
# ---
@property
def ids(self):
"""
Assigns id labels to each datum. These ids provide object
constancy of data points during animation. Should be an array
of strings, not numbers or any other type.
The 'ids' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["ids"]
@ids.setter
def ids(self, val):
self["ids"] = val
# idssrc
# ------
@property
def idssrc(self):
"""
Sets the source reference on Chart Studio Cloud for ids.
The 'idssrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["idssrc"]
@idssrc.setter
def idssrc(self, val):
self["idssrc"] = val
# meta
# ----
@property
def meta(self):
"""
Assigns extra meta information associated with this trace that
can be used in various text attributes. Attributes such as
trace `name`, graph, axis and colorbar `title.text`, annotation
`text`, `rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta` values in
an attribute in the same trace, simply use `%{meta[i]}` where
`i` is the index or key of the `meta` item in question. To
access trace `meta` in layout attributes, use
`%{data[n].meta[i]}` where `i` is the index or key of the
`meta` and `n` is the trace index.
The 'meta' property accepts values of any type
Returns
-------
Any|numpy.ndarray
"""
return self["meta"]
@meta.setter
def meta(self, val):
self["meta"] = val
# metasrc
# -------
@property
def metasrc(self):
"""
Sets the source reference on Chart Studio Cloud for meta.
The 'metasrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["metasrc"]
@metasrc.setter
def metasrc(self, val):
self["metasrc"] = val
# name
# ----
@property
def name(self):
"""
Sets the trace name. The trace name appears as the legend item
and on hover.
The 'name' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["name"]
@name.setter
def name(self, val):
self["name"] = val
# opacity
# -------
@property
def opacity(self):
"""
Sets the opacity of the trace.
The 'opacity' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
int|float
"""
return self["opacity"]
@opacity.setter
def opacity(self, val):
self["opacity"] = val
# source
# ------
@property
def source(self):
"""
Specifies the data URI of the image to be visualized. The URI
consists of "data:image/[<media subtype>][;base64],<data>"
The 'source' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["source"]
@source.setter
def source(self, val):
self["source"] = val
# stream
# ------
@property
def stream(self):
"""
The 'stream' property is an instance of Stream
that may be specified as:
- An instance of :class:`plotly.graph_objs.image.Stream`
- A dict of string/value properties that will be passed
to the Stream constructor
Supported dict properties:
maxpoints
Sets the maximum number of points to keep on
the plots from an incoming stream. If
`maxpoints` is set to 50, only the newest 50
points will be displayed on the plot.
token
The stream id number links a data trace on a
plot with a stream. See https://chart-
studio.plotly.com/settings for more details.
Returns
-------
plotly.graph_objs.image.Stream
"""
return self["stream"]
@stream.setter
def stream(self, val):
self["stream"] = val
# text
# ----
@property
def text(self):
"""
Sets the text elements associated with each z value.
The 'text' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["text"]
@text.setter
def text(self, val):
self["text"] = val
# textsrc
# -------
@property
def textsrc(self):
"""
Sets the source reference on Chart Studio Cloud for text.
The 'textsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["textsrc"]
@textsrc.setter
def textsrc(self, val):
self["textsrc"] = val
# uid
# ---
@property
def uid(self):
"""
Assign an id to this trace. Use this to provide object
constancy between traces during animations and transitions.
The 'uid' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["uid"]
@uid.setter
def uid(self, val):
self["uid"] = val
# uirevision
# ----------
@property
def uirevision(self):
"""
Controls persistence of some user-driven changes to the trace:
`constraintrange` in `parcoords` traces, as well as some
`editable: true` modifications such as `name` and
`colorbar.title`. Defaults to `layout.uirevision`. Note that
other user-driven trace attribute changes are controlled by
`layout` attributes: `trace.visible` is controlled by
`layout.legend.uirevision`, `selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)` (accessible
with `config: {editable: true}`) is controlled by
`layout.editrevision`. Trace changes are tracked by `uid`,
which only falls back on trace index if no `uid` is provided.
So if your app can add/remove traces before the end of the
`data` array, such that the same trace has a different index,
you can still preserve user-driven changes if you give each
trace a `uid` that stays with it as it moves.
The 'uirevision' property accepts values of any type
Returns
-------
Any
"""
return self["uirevision"]
@uirevision.setter
def uirevision(self, val):
self["uirevision"] = val
# visible
# -------
@property
def visible(self):
"""
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as a
legend item (provided that the legend itself is visible).
The 'visible' property is an enumeration that may be specified as:
- One of the following enumeration values:
[True, False, 'legendonly']
Returns
-------
Any
"""
return self["visible"]
@visible.setter
def visible(self, val):
self["visible"] = val
# x0
# --
@property
def x0(self):
"""
Set the image's x position.
The 'x0' property accepts values of any type
Returns
-------
Any
"""
return self["x0"]
@x0.setter
def x0(self, val):
self["x0"] = val
# xaxis
# -----
@property
def xaxis(self):
"""
Sets a reference between this trace's x coordinates and a 2D
cartesian x axis. If "x" (the default value), the x coordinates
refer to `layout.xaxis`. If "x2", the x coordinates refer to
`layout.xaxis2`, and so on.
The 'xaxis' property is an identifier of a particular
subplot, of type 'x', that may be specified as the string 'x'
optionally followed by an integer >= 1
(e.g. 'x', 'x1', 'x2', 'x3', etc.)
Returns
-------
str
"""
return self["xaxis"]
@xaxis.setter
def xaxis(self, val):
self["xaxis"] = val
# y0
# --
@property
def y0(self):
"""
Set the image's y position.
The 'y0' property accepts values of any type
Returns
-------
Any
"""
return self["y0"]
@y0.setter
def y0(self, val):
self["y0"] = val
# yaxis
# -----
@property
def yaxis(self):
"""
Sets a reference between this trace's y coordinates and a 2D
cartesian y axis. If "y" (the default value), the y coordinates
refer to `layout.yaxis`. If "y2", the y coordinates refer to
`layout.yaxis2`, and so on.
The 'yaxis' property is an identifier of a particular
subplot, of type 'y', that may be specified as the string 'y'
optionally followed by an integer >= 1
(e.g. 'y', 'y1', 'y2', 'y3', etc.)
Returns
-------
str
"""
return self["yaxis"]
@yaxis.setter
def yaxis(self, val):
self["yaxis"] = val
# z
# -
@property
def z(self):
"""
A 2-dimensional array in which each element is an array of 3 or
4 numbers representing a color.
The 'z' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["z"]
@z.setter
def z(self, val):
self["z"] = val
# zmax
# ----
@property
def zmax(self):
"""
Array defining the higher bound for each color component. Note
that the default value will depend on the colormodel. For the
`rgb` colormodel, it is [255, 255, 255]. For the `rgba`
colormodel, it is [255, 255, 255, 1]. For the `rgba256`
colormodel, it is [255, 255, 255, 255]. For the `hsl`
colormodel, it is [360, 100, 100]. For the `hsla` colormodel,
it is [360, 100, 100, 1].
The 'zmax' property is an info array that may be specified as:
* a list or tuple of 4 elements where:
(0) The 'zmax[0]' property is a number and may be specified as:
- An int or float
(1) The 'zmax[1]' property is a number and may be specified as:
- An int or float
(2) The 'zmax[2]' property is a number and may be specified as:
- An int or float
(3) The 'zmax[3]' property is a number and may be specified as:
- An int or float
Returns
-------
list
"""
return self["zmax"]
@zmax.setter
def zmax(self, val):
self["zmax"] = val
# zmin
# ----
@property
def zmin(self):
"""
Array defining the lower bound for each color component. Note
that the default value will depend on the colormodel. For the
`rgb` colormodel, it is [0, 0, 0]. For the `rgba` colormodel,
it is [0, 0, 0, 0]. For the `rgba256` colormodel, it is [0, 0,
0, 0]. For the `hsl` colormodel, it is [0, 0, 0]. For the
`hsla` colormodel, it is [0, 0, 0, 0].
The 'zmin' property is an info array that may be specified as:
* a list or tuple of 4 elements where:
(0) The 'zmin[0]' property is a number and may be specified as:
- An int or float
(1) The 'zmin[1]' property is a number and may be specified as:
- An int or float
(2) The 'zmin[2]' property is a number and may be specified as:
- An int or float
(3) The 'zmin[3]' property is a number and may be specified as:
- An int or float
Returns
-------
list
"""
return self["zmin"]
@zmin.setter
def zmin(self, val):
self["zmin"] = val
# zsmooth
# -------
@property
def zsmooth(self):
"""
Picks a smoothing algorithm used to smooth `z` data. This only
applies for image traces that use the `source` attribute.
The 'zsmooth' property is an enumeration that may be specified as:
- One of the following enumeration values:
['fast', False]
Returns
-------
Any
"""
return self["zsmooth"]
@zsmooth.setter
def zsmooth(self, val):
self["zsmooth"] = val
# zsrc
# ----
@property
def zsrc(self):
"""
Sets the source reference on Chart Studio Cloud for z.
The 'zsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["zsrc"]
@zsrc.setter
def zsrc(self, val):
self["zsrc"] = val
# type
# ----
@property
def type(self):
return self._props["type"]
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
colormodel
Color model used to map the numerical color components
described in `z` into colors. If `source` is specified,
this attribute will be set to `rgba256` otherwise it
defaults to `rgb`.
customdata
Assigns extra data to each datum. This may be useful when
listening to hover, click and selection events. Note
that "scatter" traces also append customdata items in
the marker DOM elements.
customdatasrc
Sets the source reference on Chart Studio Cloud for
customdata.
dx
Set the pixel's horizontal size.
dy
Set the pixel's vertical size.
hoverinfo
Determines which trace information appears on hover. If
`none` or `skip` are set, no information is displayed
upon hovering. But, if `none` is set, click and hover
events are still fired.
hoverinfosrc
Sets the source reference on Chart Studio Cloud for
hoverinfo.
hoverlabel
:class:`plotly.graph_objects.image.Hoverlabel` instance
or dict with compatible properties
hovertemplate
Template string used for rendering the information that
appear on hover box. Note that this will override
`hoverinfo`. Variables are inserted using %{variable},
for example "y: %{y}" as well as %{xother}, {%_xother},
{%_xother_}, {%xother_}. When showing info for several
points, "xother" will be added to those with different
x positions from the first point. An underscore before
or after "(x|y)other" will add a space on that side,
only when this field is shown. Numbers are formatted
using d3-format's syntax %{variable:d3-format}, for
example "Price: %{y:$.2f}".
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for
details on the formatting syntax. Dates are formatted
using d3-time-format's syntax %{variable|d3-time-
format}, for example "Day: %{2019-01-01|%A}".
https://github.com/d3/d3-time-format#locale_format for
details on the date formatting syntax. The variables
available in `hovertemplate` are the ones emitted as
event data described at this link
https://plotly.com/javascript/plotlyjs-events/#event-
data. Additionally, every attribute that can be
specified per-point (the ones that are `arrayOk: true`)
is available, as are the variables `z`, `color` and `colormodel`.
Anything contained in tag `<extra>` is displayed in the
secondary box, for example
"<extra>{fullData.name}</extra>". To hide the secondary
box completely, use an empty tag `<extra></extra>`.
hovertemplatesrc
Sets the source reference on Chart Studio Cloud for
hovertemplate.
hovertext
Same as `text`.
hovertextsrc
Sets the source reference on Chart Studio Cloud for
hovertext.
ids
Assigns id labels to each datum. These ids provide object
constancy of data points during animation. Should be an
array of strings, not numbers or any other type.
idssrc
Sets the source reference on Chart Studio Cloud for
ids.
meta
Assigns extra meta information associated with this
trace that can be used in various text attributes.
Attributes such as trace `name`, graph, axis and
colorbar `title.text`, annotation `text`,
`rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta`
values in an attribute in the same trace, simply use
`%{meta[i]}` where `i` is the index or key of the
`meta` item in question. To access trace `meta` in
layout attributes, use `%{data[n].meta[i]}` where `i`
is the index or key of the `meta` and `n` is the trace
index.
metasrc
Sets the source reference on Chart Studio Cloud for
meta.
name
Sets the trace name. The trace name appears as the
legend item and on hover.
opacity
Sets the opacity of the trace.
source
Specifies the data URI of the image to be visualized.
The URI consists of "data:image/[<media
subtype>][;base64],<data>"
stream
:class:`plotly.graph_objects.image.Stream` instance or
dict with compatible properties
text
Sets the text elements associated with each z value.
textsrc
Sets the source reference on Chart Studio Cloud for
text.
uid
Assign an id to this trace. Use this to provide object
constancy between traces during animations and
transitions.
uirevision
Controls persistence of some user-driven changes to the
trace: `constraintrange` in `parcoords` traces, as well
as some `editable: true` modifications such as `name`
and `colorbar.title`. Defaults to `layout.uirevision`.
Note that other user-driven trace attribute changes are
controlled by `layout` attributes: `trace.visible` is
controlled by `layout.legend.uirevision`,
`selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)`
(accessible with `config: {editable: true}`) is
controlled by `layout.editrevision`. Trace changes are
tracked by `uid`, which only falls back on trace index
if no `uid` is provided. So if your app can add/remove
traces before the end of the `data` array, such that
the same trace has a different index, you can still
preserve user-driven changes if you give each trace a
`uid` that stays with it as it moves.
visible
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as
a legend item (provided that the legend itself is
visible).
x0
Set the image's x position.
xaxis
Sets a reference between this trace's x coordinates and
a 2D cartesian x axis. If "x" (the default value), the
x coordinates refer to `layout.xaxis`. If "x2", the x
coordinates refer to `layout.xaxis2`, and so on.
y0
Set the image's y position.
yaxis
Sets a reference between this trace's y coordinates and
a 2D cartesian y axis. If "y" (the default value), the
y coordinates refer to `layout.yaxis`. If "y2", the y
coordinates refer to `layout.yaxis2`, and so on.
z
A 2-dimensional array in which each element is an array
of 3 or 4 numbers representing a color.
zmax
Array defining the higher bound for each color
component. Note that the default value will depend on
the colormodel. For the `rgb` colormodel, it is [255,
255, 255]. For the `rgba` colormodel, it is [255, 255,
255, 1]. For the `rgba256` colormodel, it is [255, 255,
255, 255]. For the `hsl` colormodel, it is [360, 100,
100]. For the `hsla` colormodel, it is [360, 100, 100,
1].
zmin
Array defining the lower bound for each color
component. Note that the default value will depend on
the colormodel. For the `rgb` colormodel, it is [0, 0,
0]. For the `rgba` colormodel, it is [0, 0, 0, 0]. For
the `rgba256` colormodel, it is [0, 0, 0, 0]. For the
`hsl` colormodel, it is [0, 0, 0]. For the `hsla`
colormodel, it is [0, 0, 0, 0].
zsmooth
Picks a smoothing algorithm used to smooth `z` data.
This only applies for image traces that use the
`source` attribute.
zsrc
Sets the source reference on Chart Studio Cloud for z.
"""
def __init__(
self,
arg=None,
colormodel=None,
customdata=None,
customdatasrc=None,
dx=None,
dy=None,
hoverinfo=None,
hoverinfosrc=None,
hoverlabel=None,
hovertemplate=None,
hovertemplatesrc=None,
hovertext=None,
hovertextsrc=None,
ids=None,
idssrc=None,
meta=None,
metasrc=None,
name=None,
opacity=None,
source=None,
stream=None,
text=None,
textsrc=None,
uid=None,
uirevision=None,
visible=None,
x0=None,
xaxis=None,
y0=None,
yaxis=None,
z=None,
zmax=None,
zmin=None,
zsmooth=None,
zsrc=None,
**kwargs
):
"""
Construct a new Image object
Display an image, i.e. data on a 2D regular raster. By default,
when an image is displayed in a subplot, its y axis will be
reversed (ie. `autorange: 'reversed'`), constrained to the
domain (ie. `constrain: 'domain'`) and it will have the same
scale as its x axis (ie. `scaleanchor: 'x,`) in order for
pixels to be rendered as squares.
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of :class:`plotly.graph_objs.Image`
colormodel
Color model used to map the numerical color components
described in `z` into colors. If `source` is specified,
this attribute will be set to `rgba256` otherwise it
defaults to `rgb`.
customdata
Assigns extra data to each datum. This may be useful when
listening to hover, click and selection events. Note
that "scatter" traces also append customdata items in
the marker DOM elements.
customdatasrc
Sets the source reference on Chart Studio Cloud for
customdata.
dx
Set the pixel's horizontal size.
dy
Set the pixel's vertical size.
hoverinfo
Determines which trace information appears on hover. If
`none` or `skip` are set, no information is displayed
upon hovering. But, if `none` is set, click and hover
events are still fired.
hoverinfosrc
Sets the source reference on Chart Studio Cloud for
hoverinfo.
hoverlabel
:class:`plotly.graph_objects.image.Hoverlabel` instance
or dict with compatible properties
hovertemplate
Template string used for rendering the information that
appear on hover box. Note that this will override
`hoverinfo`. Variables are inserted using %{variable},
for example "y: %{y}" as well as %{xother}, {%_xother},
{%_xother_}, {%xother_}. When showing info for several
points, "xother" will be added to those with different
x positions from the first point. An underscore before
or after "(x|y)other" will add a space on that side,
only when this field is shown. Numbers are formatted
using d3-format's syntax %{variable:d3-format}, for
example "Price: %{y:$.2f}".
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for
details on the formatting syntax. Dates are formatted
using d3-time-format's syntax %{variable|d3-time-
format}, for example "Day: %{2019-01-01|%A}".
https://github.com/d3/d3-time-format#locale_format for
details on the date formatting syntax. The variables
available in `hovertemplate` are the ones emitted as
event data described at this link
https://plotly.com/javascript/plotlyjs-events/#event-
data. Additionally, every attribute that can be
specified per-point (the ones that are `arrayOk: true`)
is available, as are the variables `z`, `color` and `colormodel`.
Anything contained in tag `<extra>` is displayed in the
secondary box, for example
"<extra>{fullData.name}</extra>". To hide the secondary
box completely, use an empty tag `<extra></extra>`.
hovertemplatesrc
Sets the source reference on Chart Studio Cloud for
hovertemplate .
hovertext
Same as `text`.
hovertextsrc
Sets the source reference on Chart Studio Cloud for
hovertext .
ids
Assigns id labels to each datum. These ids for object
constancy of data points during animation. Should be an
array of strings, not numbers or any other type.
idssrc
Sets the source reference on Chart Studio Cloud for
`ids`.
meta
Assigns extra meta information associated with this
trace that can be used in various text attributes.
Attributes such as the trace `name`, graph, axis and
colorbar `title.text`, annotation `text`, and
`rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta`
values in an attribute in the same trace, simply use
`%{meta[i]}` where `i` is the index or key of the
`meta` item in question. To access trace `meta` in
layout attributes, use `%{data[n].meta[i]}` where `i`
is the index or key of the `meta` and `n` is the trace
index.
metasrc
Sets the source reference on Chart Studio Cloud for
`meta`.
name
Sets the trace name. The trace name appears as the
legend item and on hover.
opacity
Sets the opacity of the trace.
source
Specifies the data URI of the image to be visualized.
The URI consists of "data:image/[<media
subtype>][;base64],<data>".
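A data URI of this form can be built from raw image bytes like so (a minimal sketch; the helper name and the PNG default are assumptions, not part of the plotly API):

```python
import base64

def image_data_uri(image_bytes, media_subtype='png'):
    """Encode raw image bytes as a 'data:image/...;base64,...' URI."""
    payload = base64.b64encode(image_bytes).decode('ascii')
    return f'data:image/{media_subtype};base64,{payload}'

# e.g. image_data_uri(open('img.png', 'rb').read()) could be
# assigned to the trace's `source` attribute.
```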
stream
:class:`plotly.graph_objects.image.Stream` instance or
dict with compatible properties
text
Sets the text elements associated with each z value.
textsrc
Sets the source reference on Chart Studio Cloud for
`text`.
uid
Assign an id to this trace. Use this to provide object
constancy between traces during animations and
transitions.
uirevision
Controls persistence of some user-driven changes to the
trace: `constraintrange` in `parcoords` traces, as well
as some `editable: true` modifications such as `name`
and `colorbar.title`. Defaults to `layout.uirevision`.
Note that other user-driven trace attribute changes are
controlled by `layout` attributes: `trace.visible` is
controlled by `layout.legend.uirevision`,
`selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)`
(accessible with `config: {editable: true}`) is
controlled by `layout.editrevision`. Trace changes are
tracked by `uid`, which only falls back on trace index
if no `uid` is provided. So if your app can add/remove
traces before the end of the `data` array, such that
the same trace has a different index, you can still
preserve user-driven changes if you give each trace a
`uid` that stays with it as it moves.
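For example, stable `uid`s can be attached to trace dicts before they are added to the figure (a sketch under the assumption that traces are plain dicts; `with_stable_uids` is a made-up helper):

```python
import uuid

def with_stable_uids(traces):
    """Give each trace a persistent uid if it lacks one, so user-driven
    changes follow the trace even when the `data` array is reordered
    (instead of falling back on the trace index)."""
    for trace in traces:
        trace.setdefault('uid', uuid.uuid4().hex)
    return traces

traces = with_stable_uids([{'type': 'image'},
                           {'type': 'image', 'uid': 'keep-me'}])
```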
visible
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as
a legend item (provided that the legend itself is
visible).
x0
Sets the image's x position.
xaxis
Sets a reference between this trace's x coordinates and
a 2D cartesian x axis. If "x" (the default value), the
x coordinates refer to `layout.xaxis`. If "x2", the x
coordinates refer to `layout.xaxis2`, and so on.
y0
Sets the image's y position.
yaxis
Sets a reference between this trace's y coordinates and
a 2D cartesian y axis. If "y" (the default value), the
y coordinates refer to `layout.yaxis`. If "y2", the y
coordinates refer to `layout.yaxis2`, and so on.
z
A 2-dimensional array in which each element is an array
of 3 or 4 numbers representing a color.
zmax
Array defining the higher bound for each color
component. Note that the default value will depend on
the colormodel. For the `rgb` colormodel, it is [255,
255, 255]. For the `rgba` colormodel, it is [255, 255,
255, 1]. For the `rgba256` colormodel, it is [255, 255,
255, 255]. For the `hsl` colormodel, it is [360, 100,
100]. For the `hsla` colormodel, it is [360, 100, 100,
1].
zmin
Array defining the lower bound for each color
component. Note that the default value will depend on
the colormodel. For the `rgb` colormodel, it is [0, 0,
0]. For the `rgba` colormodel, it is [0, 0, 0, 0]. For
the `rgba256` colormodel, it is [0, 0, 0, 0]. For the
`hsl` colormodel, it is [0, 0, 0]. For the `hsla`
colormodel, it is [0, 0, 0, 0].
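The defaults listed for `zmin` and `zmax` can be summarised in a lookup table (values copied from the description above; the dict and helper are illustrative, not part of the plotly API):

```python
# Default (zmin, zmax) component bounds per colormodel, as documented.
COLORMODEL_BOUNDS = {
    'rgb':     ([0, 0, 0],    [255, 255, 255]),
    'rgba':    ([0, 0, 0, 0], [255, 255, 255, 1]),
    'rgba256': ([0, 0, 0, 0], [255, 255, 255, 255]),
    'hsl':     ([0, 0, 0],    [360, 100, 100]),
    'hsla':    ([0, 0, 0, 0], [360, 100, 100, 1]),
}

def default_bounds(colormodel):
    """Return the documented (zmin, zmax) defaults for a colormodel."""
    return COLORMODEL_BOUNDS[colormodel]
```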
zsmooth
Picks a smoothing algorithm used to smooth `z` data.
This only applies for image traces that use the
`source` attribute.
zsrc
Sets the source reference on Chart Studio Cloud for
`z`.
Returns
-------
Image
"""
super(Image, self).__init__("image")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.Image
constructor must be a dict or
an instance of :class:`plotly.graph_objs.Image`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("colormodel", None)
_v = colormodel if colormodel is not None else _v
if _v is not None:
self["colormodel"] = _v
_v = arg.pop("customdata", None)
_v = customdata if customdata is not None else _v
if _v is not None:
self["customdata"] = _v
_v = arg.pop("customdatasrc", None)
_v = customdatasrc if customdatasrc is not None else _v
if _v is not None:
self["customdatasrc"] = _v
_v = arg.pop("dx", None)
_v = dx if dx is not None else _v
if _v is not None:
self["dx"] = _v
_v = arg.pop("dy", None)
_v = dy if dy is not None else _v
if _v is not None:
self["dy"] = _v
_v = arg.pop("hoverinfo", None)
_v = hoverinfo if hoverinfo is not None else _v
if _v is not None:
self["hoverinfo"] = _v
_v = arg.pop("hoverinfosrc", None)
_v = hoverinfosrc if hoverinfosrc is not None else _v
if _v is not None:
self["hoverinfosrc"] = _v
_v = arg.pop("hoverlabel", None)
_v = hoverlabel if hoverlabel is not None else _v
if _v is not None:
self["hoverlabel"] = _v
_v = arg.pop("hovertemplate", None)
_v = hovertemplate if hovertemplate is not None else _v
if _v is not None:
self["hovertemplate"] = _v
_v = arg.pop("hovertemplatesrc", None)
_v = hovertemplatesrc if hovertemplatesrc is not None else _v
if _v is not None:
self["hovertemplatesrc"] = _v
_v = arg.pop("hovertext", None)
_v = hovertext if hovertext is not None else _v
if _v is not None:
self["hovertext"] = _v
_v = arg.pop("hovertextsrc", None)
_v = hovertextsrc if hovertextsrc is not None else _v
if _v is not None:
self["hovertextsrc"] = _v
_v = arg.pop("ids", None)
_v = ids if ids is not None else _v
if _v is not None:
self["ids"] = _v
_v = arg.pop("idssrc", None)
_v = idssrc if idssrc is not None else _v
if _v is not None:
self["idssrc"] = _v
_v = arg.pop("meta", None)
_v = meta if meta is not None else _v
if _v is not None:
self["meta"] = _v
_v = arg.pop("metasrc", None)
_v = metasrc if metasrc is not None else _v
if _v is not None:
self["metasrc"] = _v
_v = arg.pop("name", None)
_v = name if name is not None else _v
if _v is not None:
self["name"] = _v
_v = arg.pop("opacity", None)
_v = opacity if opacity is not None else _v
if _v is not None:
self["opacity"] = _v
_v = arg.pop("source", None)
_v = source if source is not None else _v
if _v is not None:
self["source"] = _v
_v = arg.pop("stream", None)
_v = stream if stream is not None else _v
if _v is not None:
self["stream"] = _v
_v = arg.pop("text", None)
_v = text if text is not None else _v
if _v is not None:
self["text"] = _v
_v = arg.pop("textsrc", None)
_v = textsrc if textsrc is not None else _v
if _v is not None:
self["textsrc"] = _v
_v = arg.pop("uid", None)
_v = uid if uid is not None else _v
if _v is not None:
self["uid"] = _v
_v = arg.pop("uirevision", None)
_v = uirevision if uirevision is not None else _v
if _v is not None:
self["uirevision"] = _v
_v = arg.pop("visible", None)
_v = visible if visible is not None else _v
if _v is not None:
self["visible"] = _v
_v = arg.pop("x0", None)
_v = x0 if x0 is not None else _v
if _v is not None:
self["x0"] = _v
_v = arg.pop("xaxis", None)
_v = xaxis if xaxis is not None else _v
if _v is not None:
self["xaxis"] = _v
_v = arg.pop("y0", None)
_v = y0 if y0 is not None else _v
if _v is not None:
self["y0"] = _v
_v = arg.pop("yaxis", None)
_v = yaxis if yaxis is not None else _v
if _v is not None:
self["yaxis"] = _v
_v = arg.pop("z", None)
_v = z if z is not None else _v
if _v is not None:
self["z"] = _v
_v = arg.pop("zmax", None)
_v = zmax if zmax is not None else _v
if _v is not None:
self["zmax"] = _v
_v = arg.pop("zmin", None)
_v = zmin if zmin is not None else _v
if _v is not None:
self["zmin"] = _v
_v = arg.pop("zsmooth", None)
_v = zsmooth if zsmooth is not None else _v
if _v is not None:
self["zsmooth"] = _v
_v = arg.pop("zsrc", None)
_v = zsrc if zsrc is not None else _v
if _v is not None:
self["zsrc"] = _v
# Read-only literals
# ------------------
self._props["type"] = "image"
arg.pop("type", None)
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
| 33.038874 | 98 | 0.551365 | 6,038 | 49,294 | 4.459921 | 0.079 | 0.013183 | 0.022726 | 0.016414 | 0.7294 | 0.717071 | 0.710758 | 0.708604 | 0.701994 | 0.691114 | 0 | 0.012487 | 0.359882 | 49,294 | 1,491 | 99 | 33.061033 | 0.840939 | 0.452266 | 0 | 0.127419 | 0 | 0.001613 | 0.444725 | 0.013292 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114516 | false | 0 | 0.003226 | 0.003226 | 0.183871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
54ef94ee07bd7ed468f1f20cd0a8784070138fe9 | 35 | py | Python | main.py | istommao/toolbox-api | 1d7186d9668bcd396346d4154b7ff6ae00b0f59b | [
"MIT"
] | null | null | null | main.py | istommao/toolbox-api | 1d7186d9668bcd396346d4154b7ff6ae00b0f59b | [
"MIT"
] | null | null | null | main.py | istommao/toolbox-api | 1d7186d9668bcd396346d4154b7ff6ae00b0f59b | [
"MIT"
] | null | null | null | from src.app import APP
app = APP
| 8.75 | 23 | 0.714286 | 7 | 35 | 3.571429 | 0.571429 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228571 | 35 | 3 | 24 | 11.666667 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
54f6853a0e415fbedcf4255398607e5a767d1482 | 244 | py | Python | vendor-local/lib/python/celery/loaders/app.py | Mozilla-GitHub-Standards/6f0d85288b5b0ef8beecb60345173dc14c98e40f48e1307a444ab1e08231e695 | bf6a382913901ad193d907f022086931df0de8c4 | [
"BSD-3-Clause"
] | 1 | 2015-07-13T03:29:04.000Z | 2015-07-13T03:29:04.000Z | vendor-local/lib/python/celery/loaders/app.py | Mozilla-GitHub-Standards/6f0d85288b5b0ef8beecb60345173dc14c98e40f48e1307a444ab1e08231e695 | bf6a382913901ad193d907f022086931df0de8c4 | [
"BSD-3-Clause"
] | 2 | 2015-03-03T23:02:19.000Z | 2019-03-30T04:45:51.000Z | vendor-local/lib/python/celery/loaders/app.py | Mozilla-GitHub-Standards/6f0d85288b5b0ef8beecb60345173dc14c98e40f48e1307a444ab1e08231e695 | bf6a382913901ad193d907f022086931df0de8c4 | [
"BSD-3-Clause"
] | 2 | 2016-04-15T11:43:05.000Z | 2016-04-15T11:43:15.000Z | # -*- coding: utf-8 -*-
"""
celery.loaders.app
~~~~~~~~~~~~~~~~~~
The default loader used with custom app instances.
"""
from __future__ import absolute_import
from .base import BaseLoader
class AppLoader(BaseLoader):
    pass
| 15.25 | 54 | 0.635246 | 27 | 244 | 5.555556 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005128 | 0.20082 | 244 | 15 | 55 | 16.266667 | 0.764103 | 0.459016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
54f6abc955f7e6fdf7284f3837cf0284e2f71d17 | 20 | py | Python | test/tokenize/t23.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/tokenize/t23.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/tokenize/t23.py | csev/skulpt | 9aa25b7dbf29f23ee8d3140d01a6f4353d12e66f | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | x = u'abc' + U'ABC'
| 10 | 19 | 0.45 | 5 | 20 | 1.8 | 0.6 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 20 | 1 | 20 | 20 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0712314f22dc1dedf01c65d39dbc722b19510a17 | 190 | py | Python | pywiktionary/parsers/__init__.py | alessandrome/pywiktionary | b9378ca1e2dfe704eaa8a044bd82519b12f81226 | [
"MIT"
] | 4 | 2019-08-08T21:15:01.000Z | 2021-01-14T01:32:18.000Z | pywiktionary/parsers/__init__.py | alessandrome/pywiktionary | b9378ca1e2dfe704eaa8a044bd82519b12f81226 | [
"MIT"
] | 1 | 2021-09-02T17:24:12.000Z | 2021-09-02T17:24:12.000Z | pywiktionary/parsers/__init__.py | alessandrome/pywiktionary | b9378ca1e2dfe704eaa8a044bd82519b12f81226 | [
"MIT"
] | 1 | 2020-03-19T12:57:45.000Z | 2020-03-19T12:57:45.000Z | from .basic_parser import BasicParser
from .english_parser import EnglishParser, SECTION_ID as ENGLISH_SECTION_ID
from .italian_parser import ItalianParser, SECTION_ID as ITALIAN_SECTION_ID
| 47.5 | 75 | 0.878947 | 27 | 190 | 5.851852 | 0.444444 | 0.227848 | 0.139241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 190 | 3 | 76 | 63.333333 | 0.918605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4acc308f7ced794ea469367cbc389cd63c66dbe5 | 192 | py | Python | backend/api/__init__.py | XoriensLair/XoriensLair.github.io | 61675ba296ee747a2a0bd729ec50becb6c903a18 | [
"MIT"
] | null | null | null | backend/api/__init__.py | XoriensLair/XoriensLair.github.io | 61675ba296ee747a2a0bd729ec50becb6c903a18 | [
"MIT"
] | null | null | null | backend/api/__init__.py | XoriensLair/XoriensLair.github.io | 61675ba296ee747a2a0bd729ec50becb6c903a18 | [
"MIT"
] | null | null | null | from api.accessapi import get
from api.dndutil import *
from api.pycritter import APIError, api_get_bestiary, api_get_creature
from api.pylink import PyLink
from api.character import Character | 38.4 | 70 | 0.848958 | 30 | 192 | 5.3 | 0.4 | 0.220126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 192 | 5 | 71 | 38.4 | 0.929825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4ad6da871c1344379b1a9116ad308b49431dbbe0 | 84 | py | Python | aiproteomics/frag/models/__init__.py | ai-proteomics/aiproteomics | 125aed4b3528bfd40349ef932034d9532ab969c3 | [
"Apache-2.0"
] | null | null | null | aiproteomics/frag/models/__init__.py | ai-proteomics/aiproteomics | 125aed4b3528bfd40349ef932034d9532ab969c3 | [
"Apache-2.0"
] | 14 | 2022-03-30T19:49:30.000Z | 2022-03-31T11:39:27.000Z | aiproteomics/frag/models/__init__.py | ai-proteomics/aiproteomics | 125aed4b3528bfd40349ef932034d9532ab969c3 | [
"Apache-2.0"
] | null | null | null | from .transformer_frag import *
from . import prosit1
from .prosit1_model import *
| 16.8 | 31 | 0.785714 | 11 | 84 | 5.818182 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.154762 | 84 | 4 | 32 | 21 | 0.873239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4ae4eb58cea9a7732bfd549ff5497c462452cf33 | 87 | py | Python | quotetools/__init__.py | entchen66/sinbad3.1 | 3353118b8693c84d5572ab2a7a2278a32be2a76c | [
"MIT"
] | null | null | null | quotetools/__init__.py | entchen66/sinbad3.1 | 3353118b8693c84d5572ab2a7a2278a32be2a76c | [
"MIT"
] | null | null | null | quotetools/__init__.py | entchen66/sinbad3.1 | 3353118b8693c84d5572ab2a7a2278a32be2a76c | [
"MIT"
] | 1 | 2020-02-29T10:57:21.000Z | 2020-02-29T10:57:21.000Z | from . import quotetools
def setup(bot):
    bot.add_cog(quotetools.QuoteTools(bot))
| 14.5 | 43 | 0.735632 | 12 | 87 | 5.25 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149425 | 87 | 5 | 44 | 17.4 | 0.851351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4af4d357f0c14ab60bb4e2f65dd8f512e91de2ac | 1,677 | py | Python | Decoder.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | Decoder.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | Decoder.py | eym55/mango-client-python | 2cb1ce77d785343c24ecba913eaa9693c3db1181 | [
"MIT"
] | null | null | null | import base64
import base58
import logging
import typing
from solana.publickey import PublicKey
def decode_binary(encoded: typing.List) -> bytes:
    if isinstance(encoded, str):
        return base58.b58decode(encoded)
    elif encoded[1] == "base64":
        return base64.b64decode(encoded[0])
    else:
        return base58.b58decode(encoded[0])


def encode_binary(decoded: bytes) -> typing.List:
    return [base64.b64encode(decoded), "base64"]


def encode_key(key: PublicKey) -> str:
    return str(key)


if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)
    data = decode_binary(['AwAAAAAAAACCaOmpoURMK6XHelGTaFawcuQ/78/15LAemWI8jrt3SRKLy2R9i60eclDjuDS8+p/ZhvTUd9G7uQVOYCsR6+BhmqGCiO6EPYP2PQkf/VRTvw7JjXvIjPFJy06QR1Cq1WfTonHl0OjCkyEf60SD07+MFJu5pVWNFGGEO/8AiAYfduaKdnFTaZEHPcK5Eq72WWHeHg2yIbBF09kyeOhlCJwOoG8O5SgpPV8QOA64ZNV4aKroFfADg6kEy/wWCdp3fv2B8WJgAAAAANVfH3HGtjwAAQAAAAAAAADr8cwFi9UOAAEAAAAAAAAAgfFiYAAAAABo3Dbz0L0oAAEAAAAAAAAAr8K+TvCjCwABAAAAAAAAAIHxYmAAAAAA49t5tVNZhwABAAAAAAAAAAmPtcB1zC8AAQAAAAAAAABIBGiCcyaEZdNhrTyeqUY692vOzzPdHaxAxguht3JQGlkzjtd05dX9LENHkl2z1XvUbTNKZlweypNRetmH0lmQ9VYQAHqylxZVK65gEg85g27YuSyvOBZAjJyRmYU9KdCO1D+4ehdPu9dQB1yI1uh75wShdAaFn2o4qrMYwq3SQQEAAAAAAAAAAiH1PPJKAuh6oGiE35aGhUQhFi/bxgKOudpFv8HEHNCFDy1uAqR6+CTQmradxC1wyyjL+iSft+5XudJWwSdi72NJGmyK96x7Obj/AgAAAAB8RjOEdJow6r9LMhIAAAAAGkNK4CXHh5M2st7PnwAAAE33lx1h8hPFD04AAAAAAAA8YRV3Oa309B2wGwAAAAAAOIlOLmkr6+r605n+AQAAAACgmZmZmZkZAQAAAAAAAAAAMDMzMzMzMwEAAAAAAAAA25D1XcAtRzSuuyx3U+X7aE9vM1EJySU9KprgL0LMJ/vat9+SEEUZuga7O5tTUrcMDYWDg+LYaAWhSQiN2fYk7aCGAQAAAAAAgIQeAAAAAAAA8gUqAQAAAAYGBgICAAAA', 'base64'])
    print(f"Data length (should be 744): {len(data)}")
ab1a13db2a9437b692b17ebed60791632b4f6f96 | 18,784 | py | Python | local_epifx/tests/test_seir_forecast.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | local_epifx/tests/test_seir_forecast.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | local_epifx/tests/test_seir_forecast.py | ruarai/epifx.covid | be7aecbf9e86c3402f6851ea65f6705cdb59f3cf | [
"BSD-3-Clause"
] | null | null | null | """Test cases for the SEIR forecasting example."""
import datetime
import epifx.cmd.decl_fs as fs
import logging
import numpy as np
import os
import pkgutil
import pypfilt.config
import pypfilt.sweep
def two_forecast_dates(all_obs, fs_from):
    """Select only two forecasting dates, to reduce computation time."""
    first_obs = min(obs['date'] for obs in all_obs
                    if obs['date'] >= fs_from)
    twelve_weeks_later = first_obs + datetime.timedelta(days=12 * 7)
    return [first_obs, twelve_weeks_later]


def two_forecast_times(all_obs, fs_from):
    """Select only two forecasting times, to reduce computation time."""
    first_obs = min(obs['date'] for obs in all_obs
                    if obs['date'] >= fs_from)
    twelve_weeks_later = first_obs + 12 * 7
    return [first_obs, twelve_weeks_later]
def simulate_seir_observations():
    """Generate synthetic observations from a known model."""
    toml_file = 'seir.toml'
    pr_file = 'pr-obs.ssv'
    obs_file_dt = 'simulated-weekly-cases-datetime.ssv'
    obs_file_sc = 'simulated-weekly-cases-scalar.ssv'
    toml_data = pkgutil.get_data('epifx.example.seir', toml_file).decode()
    config = pypfilt.config.from_string(toml_data)
    pr_data = pkgutil.get_data('epifx.example.seir', pr_file).decode()
    with open(pr_file, mode='w') as f:
        f.write(pr_data)
    forecasts = list(pypfilt.sweep.forecasts(config, load_obs=False))
    assert len(forecasts) == 1
    forecast = forecasts[0]
    # NOTE: define the fixed ground truth for the model simulation.
    params = forecast.params
    params['model']['prior'] = {
        'R0': lambda r, size=None: r.uniform(low=1.45, high=1.45, size=size),
        'sigma': lambda r, size=None: r.uniform(low=0.25, high=0.25,
                                                size=size),
        'gamma': lambda r, size=None: r.uniform(low=0.25, high=0.25,
                                                size=size),
        'eta': lambda r, size=None: r.uniform(low=1.0, high=1.0, size=size),
        'alpha': lambda r, size=None: r.uniform(low=0.0, high=0.0, size=size),
        't0': lambda r, size=None: r.uniform(low=14.0, high=14.0, size=size),
    }
    obs_table = pypfilt.simulate_from_model(params, px_count=1)

    # Extract weekly observations from the simulated data.
    def to_date(bs):
        return datetime.datetime.strptime(bs.decode(),
                                          '%Y-%m-%d %H:%M:%S').date()

    obs_list = [(to_date(row['date']), row['value'].astype(int))
                for row in obs_table
                if to_date(row['date']).isoweekday() == 7]
    # Save date-indexed observations to disk.
    dt_obs = [(row[0].strftime('%Y-%m-%d'), row[1])
              for row in obs_list]
    dt_obs = np.array(dt_obs, dtype=[('date', 'O'), ('value', np.int_)])
    np.savetxt(obs_file_dt, dt_obs, fmt='%s %d',
               header='date cases', comments='')
    # Save day-indexed observations to disk.
    sc_obs = [(int(row[0].strftime('%-j')), row[1])
              for row in obs_list]
    sc_obs = np.array(sc_obs, dtype=[('day', np.int_), ('value', np.int_)])
    np.savetxt(obs_file_sc, sc_obs, fmt='%d %d',
               header='day cases', comments='')
    return (obs_list, obs_file_dt, obs_file_sc)
def test_simulate():
    """
    Generate synthetic observations from a known model, and check that the
    serialised results are consistent.
    """
    (obs_list, obs_file_dt, obs_file_sc) = simulate_seir_observations()
    peak_size = max(o[1] for o in obs_list)
    peak_time = [o[0] for o in obs_list if o[1] == peak_size][0]
    assert peak_size == 2678
    assert peak_time == datetime.date(2014, 9, 14)
    peak_day = int(peak_time.strftime('%-j'))
    # Check that the date-indexed peak is consistent with the above results.
    dt_cols = [pypfilt.io.date_column('date'), ('cases', int)]
    dt_obs = pypfilt.io.read_table(obs_file_dt, dt_cols)
    dt_mask = dt_obs['cases'] == peak_size
    assert np.sum(dt_mask) == 1
    assert dt_obs['date'][dt_mask].item().date() == peak_time
    # Check that the day-indexed peak is consistent with the above results.
    sc_cols = [('day', int), ('cases', int)]
    sc_obs = pypfilt.io.read_table(obs_file_sc, sc_cols)
    sc_mask = sc_obs['cases'] == peak_size
    assert np.sum(sc_mask) == 1
    assert sc_obs['day'][sc_mask] == peak_day
    # Check that the observations are the same.
    assert np.array_equal(sc_obs['cases'], dt_obs['cases'])
    # Clean up: remove created files.
    os.remove(obs_file_dt)
    os.remove(obs_file_sc)
def test_seeiir_forecast():
    """
    Use the SEEIIR forecasting example to compare peak size and time
    predictions at two forecasting dates.

    Note that the observation probability is set to 0.5 (much too high) and so
    we should only obtain sensible forecasts if the observation model is able
    to use the lookup table and obtain observation probabilities from the
    ``pr-obs.ssv`` data file.
    """
    logging.basicConfig(level=logging.INFO)
    toml_file = 'seeiir.toml'
    obs_file = 'weekly-cases.ssv'
    pr_file = 'pr-obs.ssv'
    toml_data = pkgutil.get_data('epifx.example.seir', toml_file).decode()
    config = pypfilt.config.from_string(toml_data)
    obs_data = pkgutil.get_data('epifx.example.seir', obs_file).decode()
    with open(obs_file, mode='w') as f:
        f.write(obs_data)
    pr_data = pkgutil.get_data('epifx.example.seir', pr_file).decode()
    with open(pr_file, mode='w') as f:
        f.write(pr_data)
    forecast_from = datetime.datetime(2014, 4, 1)
    # Check that there is only one set of forecasts (i.e., only one location
    # and only one set of observation model parameters).
    forecasts = pypfilt.sweep.forecasts(config)
    forecasts = list(forecasts)
    assert len(forecasts) == 1
    # Check that forecasts were run for two forecasting dates.
    forecast = forecasts[0]
    forecast_dates = two_forecast_dates(forecast.all_observations,
                                        forecast_from)
    state = fs.run(forecast, forecast_dates)
    fs_dates = list(state.keys())
    assert len(fs_dates) == 2
    fs_date_n1 = fs_dates[0]
    fs_date_n2 = fs_dates[1]
    # Retrieve the list of observations.
    obs = state[fs_date_n1]['obs']
    peak_size = max(o['value'] for o in obs)
    peak_date = [o['date'] for o in obs if o['value'] == peak_size][0]
    # Check that the peak size and date are as expected.
    assert peak_size == 2678
    assert peak_date == datetime.datetime(2014, 9, 14)
    # Compare the forecast predictions to the observed peak size and date.
    forecast_n1 = state[fs_date_n1][fs_date_n1]['summary']
    forecast_n2 = state[fs_date_n2][fs_date_n2]['summary']
    dt_format = '%Y-%m-%d %H:%M:%S'
    # Ensure that all of the expected tables have been created, and that no
    # other tables have been created.
    expected_tables = {
        'model_cints', 'param_covar', 'pr_epi', 'forecasts', 'obs_llhd',
        'peak_size_acc', 'peak_time_acc', 'peak_cints', 'peak_ensemble',
        'obs/cases', 'exceed_500', 'exceed_1000', 'expected_obs'}
    tables_n1 = set(forecast_n1.keys())
    tables_n2 = set(forecast_n2.keys())
    assert tables_n1 == expected_tables
    assert tables_n2 == expected_tables
    # Ensure that no tables are empty.
    for name in expected_tables:
        shape_n1 = forecast_n1[name].shape
        shape_n2 = forecast_n2[name].shape
        assert len(shape_n1) == 1
        assert len(shape_n2) == 1
        assert shape_n1[0] > 0
        assert shape_n2[0] > 0
    # Ensure that the exceed_500 and exceed_1000 tables differ.
    pr_exc_low_n1 = forecast_n1['exceed_500'][()]['prob']
    pr_exc_high_n1 = forecast_n1['exceed_1000'][()]['prob']
    assert pr_exc_low_n1.shape == pr_exc_high_n1.shape
    assert not np.allclose(pr_exc_low_n1, pr_exc_high_n1)
    # Ensure that the cumulative probability of exceeding 500 cases is
    # greater than that of exceeding 1000 cases, until they both equal 1.0.
    cum_pr_low_n1 = np.cumsum(pr_exc_low_n1)
    cum_pr_high_n1 = np.cumsum(pr_exc_high_n1)
    mask_lt_1 = np.logical_and(cum_pr_low_n1 < 1.0, cum_pr_high_n1 < 1.0)
    mask_gt_0 = np.logical_or(cum_pr_low_n1 > 0.0, cum_pr_high_n1 > 0.0)
    mask = np.logical_and(mask_lt_1, mask_gt_0)
    assert np.all(cum_pr_low_n1[mask] > cum_pr_high_n1[mask])
    # The earlier forecast should include the peak size and time in its 95%
    # credible intervals.
    cints_n1 = forecast_n1['peak_cints']
    ci_n1 = cints_n1[cints_n1['prob'] == 95]
    ci_n1_size_lower = ci_n1['sizemin'].item()
    ci_n1_size_upper = ci_n1['sizemax'].item()
    ci_n1_date_lower = datetime.datetime.strptime(
        ci_n1['timemin'].item().decode(), dt_format)
    ci_n1_date_upper = datetime.datetime.strptime(
        ci_n1['timemax'].item().decode(), dt_format)
    assert ci_n1_size_lower <= peak_size <= ci_n1_size_upper
    assert ci_n1_date_lower <= peak_date <= ci_n1_date_upper
    # The later forecast will have narrowed, and it should still include the
    # peak size and time in its 95% credible intervals.
    cints_n2 = forecast_n2['peak_cints']
    ci_n2 = cints_n2[cints_n2['prob'] == 95]
    ci_n2_size_lower = ci_n2['sizemin'].item()
    ci_n2_size_upper = ci_n2['sizemax'].item()
    ci_n2_date_lower = datetime.datetime.strptime(
        ci_n2['timemin'].item().decode(), dt_format)
    ci_n2_date_upper = datetime.datetime.strptime(
        ci_n2['timemax'].item().decode(), dt_format)
    assert ci_n2_size_lower <= peak_size <= ci_n2_size_upper
    assert ci_n2_date_lower <= peak_date <= ci_n2_date_upper
    # The later forecast should have more accurate predictions of peak size.
    size_acc_n1 = forecast_n1['peak_size_acc']['acc']
    size_acc_n2 = forecast_n2['peak_size_acc']['acc']
    assert all(size_acc_n1 > 0.3)
    assert any(size_acc_n1 < 0.7)
    assert all(size_acc_n2 > 0.7)
    # The later forecast should have more accurate predictions of peak time.
    time_acc_n1 = forecast_n1['peak_time_acc']['acc']
    time_acc_n2 = forecast_n2['peak_time_acc']['acc']
    assert all(time_acc_n1 > 0.1)
    assert any(time_acc_n1 < 0.3)
    assert all(time_acc_n2 > 0.7)
    # Clean up: remove created files.
    os.remove(obs_file)
    os.remove(pr_file)
    os.remove(state[fs_date_n1]['forecast_file'])
    os.remove(state[fs_date_n2]['forecast_file'])
def test_seir_forecast():
    """
    Use the SEIR forecasting example to compare peak size and time
    predictions at two forecasting dates.

    Note that the observation probability is set to 0.5 (much too high) and so
    we should only obtain sensible forecasts if the observation model is able
    to use the lookup table and obtain observation probabilities from the
    ``pr-obs.ssv`` data file.
    """
    logging.basicConfig(level=logging.INFO)
    toml_file = 'seir.toml'
    obs_file = 'weekly-cases.ssv'
    pr_file = 'pr-obs.ssv'
    toml_data = pkgutil.get_data('epifx.example.seir', toml_file).decode()
    config = pypfilt.config.from_string(toml_data)
    obs_data = pkgutil.get_data('epifx.example.seir', obs_file).decode()
    with open(obs_file, mode='w') as f:
        f.write(obs_data)
    pr_data = pkgutil.get_data('epifx.example.seir', pr_file).decode()
    with open(pr_file, mode='w') as f:
        f.write(pr_data)
    forecast_from = datetime.datetime(2014, 4, 1)
    # Check that there is only one set of forecasts (i.e., only one location
    # and only one set of observation model parameters).
    forecasts = pypfilt.sweep.forecasts(config)
    forecasts = list(forecasts)
    assert len(forecasts) == 1
    # Check that forecasts were run for two forecasting dates.
    forecast = forecasts[0]
    forecast_dates = two_forecast_dates(forecast.all_observations,
                                        forecast_from)
    state = fs.run(forecast, forecast_dates)
    fs_dates = list(state.keys())
    assert len(fs_dates) == 2
    fs_date_n1 = fs_dates[0]
    fs_date_n2 = fs_dates[1]
    # Retrieve the list of observations.
    obs = state[fs_date_n1]['obs']
    peak_size = max(o['value'] for o in obs)
    peak_date = [o['date'] for o in obs if o['value'] == peak_size][0]
    # Check that the peak size and date are as expected.
    assert peak_size == 2678
    assert peak_date == datetime.datetime(2014, 9, 14)
    # Compare the forecast predictions to the observed peak size and date.
    forecast_n1 = state[fs_date_n1][fs_date_n1]['summary']
    forecast_n2 = state[fs_date_n2][fs_date_n2]['summary']
    dt_format = '%Y-%m-%d %H:%M:%S'
    # The earlier forecast should include the peak size and time in its 95%
    # credible intervals.
    cints_n1 = forecast_n1['peak_cints']
    ci_n1 = cints_n1[cints_n1['prob'] == 95]
    ci_n1_size_lower = ci_n1['sizemin'].item()
    ci_n1_size_upper = ci_n1['sizemax'].item()
    ci_n1_date_lower = datetime.datetime.strptime(
        ci_n1['timemin'].item().decode(), dt_format)
    ci_n1_date_upper = datetime.datetime.strptime(
        ci_n1['timemax'].item().decode(), dt_format)
    assert ci_n1_size_lower <= peak_size <= ci_n1_size_upper
    assert ci_n1_date_lower <= peak_date <= ci_n1_date_upper
    # The later forecast will have narrowed, and it should still include the
    # peak size and time in its 95% credible intervals.
    cints_n2 = forecast_n2['peak_cints']
    ci_n2 = cints_n2[cints_n2['prob'] == 95]
    ci_n2_size_lower = ci_n2['sizemin'].item()
    ci_n2_size_upper = ci_n2['sizemax'].item()
    ci_n2_date_lower = datetime.datetime.strptime(
        ci_n2['timemin'].item().decode(), dt_format)
    ci_n2_date_upper = datetime.datetime.strptime(
        ci_n2['timemax'].item().decode(), dt_format)
    assert ci_n2_size_lower <= peak_size <= ci_n2_size_upper
    assert ci_n2_date_lower <= peak_date <= ci_n2_date_upper
    # The later forecast should have more accurate predictions of peak size.
    size_acc_n1 = forecast_n1['peak_size_acc']['acc']
    size_acc_n2 = forecast_n2['peak_size_acc']['acc']
    assert all(size_acc_n1 > 0.3)
    assert any(size_acc_n1 < 0.7)
    assert all(size_acc_n2 > 0.7)
    # The later forecast should have more accurate predictions of peak time.
    time_acc_n1 = forecast_n1['peak_time_acc']['acc']
    time_acc_n2 = forecast_n2['peak_time_acc']['acc']
    assert all(time_acc_n1 > 0.1)
    assert any(time_acc_n1 < 0.3)
    assert all(time_acc_n2 > 0.7)
    # Clean up: remove created files.
    os.remove(obs_file)
    os.remove(pr_file)
    os.remove(state[fs_date_n1]['forecast_file'])
    os.remove(state[fs_date_n2]['forecast_file'])


def test_seeiir_scalar_forecast():
"""
Use the SEEIIR forecasting example to compare peak size and time
predictions at two forecasting dates.
Note that the observation probability is set to 0.5 (much too high) and so
we should only obtain sensible forecasts if the observation model is able
to use the lookup table and obtain observation probabilities from the
``pr-obs.ssv`` data file.
"""
logging.basicConfig(level=logging.INFO)
toml_file = 'seeiir_scalar.toml'
obs_file = 'weekly-cases-scalar.ssv'
pr_file = 'pr-obs-scalar.ssv'
toml_data = pkgutil.get_data('epifx.example.seir', toml_file).decode()
config = pypfilt.config.from_string(toml_data)
obs_data = pkgutil.get_data('epifx.example.seir', obs_file).decode()
with open(obs_file, mode='w') as f:
f.write(obs_data)
pr_data = pkgutil.get_data('epifx.example.seir', pr_file).decode()
with open(pr_file, mode='w') as f:
f.write(pr_data)
forecast_from = 91
# Check that there is only one set of forecasts (i.e., only one location
# and only one set of observation model parameters).
forecasts = pypfilt.sweep.forecasts(config)
forecasts = list(forecasts)
assert len(forecasts) == 1
# Check that forecasts were run for two forecasting dates.
forecast = forecasts[0]
forecast_dates = two_forecast_times(forecast.all_observations,
                                    forecast_from)
state = fs.run(forecast, forecast_dates)
fs_dates = list(state.keys())
assert len(fs_dates) == 2
fs_date_n1 = fs_dates[0]
fs_date_n2 = fs_dates[1]
# Retrieve the list of observations.
obs = state[fs_date_n1]['obs']
peak_size = max(o['value'] for o in obs)
peak_date = [o['date'] for o in obs if o['value'] == peak_size][0]
# Check that the peak size and date are as expected.
assert peak_size == 2678
assert peak_date == 257
# Compare the forecast predictions to the observed peak size and date.
forecast_n1 = state[fs_date_n1][fs_date_n1]['summary']
forecast_n2 = state[fs_date_n2][fs_date_n2]['summary']
# The earlier forecast should include the peak size and time in its 95%
# credible intervals.
cints_n1 = forecast_n1['peak_cints']
ci_n1 = cints_n1[cints_n1['prob'] == 95]
ci_n1_size_lower = ci_n1['sizemin'].item()
ci_n1_size_upper = ci_n1['sizemax'].item()
ci_n1_date_lower = ci_n1['timemin'].item()
ci_n1_date_upper = ci_n1['timemax'].item()
assert ci_n1_size_lower <= peak_size <= ci_n1_size_upper
assert ci_n1_date_lower <= peak_date <= ci_n1_date_upper
# The later forecast will have narrowed, and it should still include the
# peak size and time in its 95% credible intervals.
cints_n2 = forecast_n2['peak_cints']
ci_n2 = cints_n2[cints_n2['prob'] == 95]
ci_n2_size_lower = ci_n2['sizemin'].item()
ci_n2_size_upper = ci_n2['sizemax'].item()
ci_n2_date_lower = ci_n2['timemin'].item()
ci_n2_date_upper = ci_n2['timemax'].item()
assert ci_n2_size_lower <= peak_size <= ci_n2_size_upper
assert ci_n2_date_lower <= peak_date <= ci_n2_date_upper
# The later forecast should have more accurate predictions of peak size.
size_acc_n1 = forecast_n1['peak_size_acc']['acc']
size_acc_n2 = forecast_n2['peak_size_acc']['acc']
assert all(size_acc_n1 > 0.3)
assert any(size_acc_n1 < 0.7)
assert all(size_acc_n2 > 0.7)
# The later forecast should have more accurate predictions of peak time.
time_acc_n1 = forecast_n1['peak_time_acc']['acc']
time_acc_n2 = forecast_n2['peak_time_acc']['acc']
assert all(time_acc_n1 > 0.1)
assert any(time_acc_n1 < 0.3)
assert all(time_acc_n2 > 0.7)
# Clean up: remove created files.
os.remove(obs_file)
os.remove(pr_file)
os.remove(state[fs_date_n1]['forecast_file'])
os.remove(state[fs_date_n2]['forecast_file'])
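Both tests repeat the same 95% credible-interval containment check for peak size and time. A small helper could factor that pattern out; the sketch below is illustrative only (the name `ci_contains` and the structured-array layout are assumptions, not part of the epifx summary tables):

```python
import numpy as np


def ci_contains(cints, prob, peak_size, peak_time):
    """Return True if the peak lies inside the credible interval row
    for the given probability level (e.g., 95)."""
    ci = cints[cints['prob'] == prob]
    return (ci['sizemin'].item() <= peak_size <= ci['sizemax'].item()
            and ci['timemin'].item() <= peak_time <= ci['timemax'].item())


# A structured array mimicking one row of a 'peak_cints' table.
cints = np.array(
    [(95, 2000.0, 3000.0, 250.0, 265.0)],
    dtype=[('prob', 'i4'), ('sizemin', 'f8'), ('sizemax', 'f8'),
           ('timemin', 'f8'), ('timemax', 'f8')])
assert ci_contains(cints, 95, 2678, 257)
```

The assertions in the tests above would then reduce to single calls such as `assert ci_contains(cints_n1, 95, peak_size, peak_date)` for each forecast.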
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
from oscar.core.compat import AUTH_USER_MODEL, AUTH_USER_MODEL_NAME


class Migration(SchemaMigration):
def forwards(self, orm):
# Changing field 'Message.type'
db.alter_column(u'oscar_support_message', 'type', self.gf('django.db.models.fields.CharField')(max_length=30))
def backwards(self, orm):
# Changing field 'Message.type'
db.alter_column(u'oscar_support_message', 'type', self.gf('django.db.models.fields.CharField')(max_length=3))
models = {
u'address.country': {
'Meta': {'ordering': "('-display_order', 'name')", 'object_name': 'Country'},
'display_order': ('django.db.models.fields.PositiveSmallIntegerField', [], {'default': '0', 'db_index': 'True'}),
'is_shipping_country': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'iso_3166_1_a2': ('django.db.models.fields.CharField', [], {'max_length': '2', 'primary_key': 'True'}),
'iso_3166_1_a3': ('django.db.models.fields.CharField', [], {'max_length': '3', 'null': 'True', 'db_index': 'True'}),
'iso_3166_1_numeric': ('django.db.models.fields.PositiveSmallIntegerField', [], {'null': 'True', 'db_index': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'printable_name': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
AUTH_USER_MODEL: {
'Meta': {'object_name': AUTH_USER_MODEL_NAME},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'catalogue.attributeentity': {
'Meta': {'object_name': 'AttributeEntity'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255', 'blank': 'True'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'entities'", 'to': u"orm['catalogue.AttributeEntityType']"})
},
u'catalogue.attributeentitytype': {
'Meta': {'object_name': 'AttributeEntityType'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255', 'blank': 'True'})
},
u'catalogue.attributeoption': {
'Meta': {'object_name': 'AttributeOption'},
'group': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'options'", 'to': u"orm['catalogue.AttributeOptionGroup']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'option': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
u'catalogue.attributeoptiongroup': {
'Meta': {'object_name': 'AttributeOptionGroup'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
u'catalogue.category': {
'Meta': {'ordering': "['full_name']", 'object_name': 'Category'},
'depth': ('django.db.models.fields.PositiveIntegerField', [], {}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'full_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('django.db.models.fields.files.ImageField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'db_index': 'True'}),
'numchild': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'path': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'})
},
u'catalogue.option': {
'Meta': {'object_name': 'Option'},
'code': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '128'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'type': ('django.db.models.fields.CharField', [], {'default': "'Required'", 'max_length': '128'})
},
u'catalogue.product': {
'Meta': {'ordering': "['-date_created']", 'object_name': 'Product'},
'attributes': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['catalogue.ProductAttribute']", 'through': u"orm['catalogue.ProductAttributeValue']", 'symmetrical': 'False'}),
'categories': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['catalogue.Category']", 'through': u"orm['catalogue.ProductCategory']", 'symmetrical': 'False'}),
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'db_index': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_discountable': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'variants'", 'null': 'True', 'to': u"orm['catalogue.Product']"}),
'product_class': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'products'", 'null': 'True', 'to': u"orm['catalogue.ProductClass']"}),
'product_options': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['catalogue.Option']", 'symmetrical': 'False', 'blank': 'True'}),
'rating': ('django.db.models.fields.FloatField', [], {'null': 'True'}),
'recommended_products': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['catalogue.Product']", 'symmetrical': 'False', 'through': u"orm['catalogue.ProductRecommendation']", 'blank': 'True'}),
'related_products': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'relations'", 'blank': 'True', 'to': u"orm['catalogue.Product']"}),
'score': ('django.db.models.fields.FloatField', [], {'default': '0.0', 'db_index': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'}),
'status': ('django.db.models.fields.CharField', [], {'db_index': 'True', 'max_length': '128', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'upc': ('django.db.models.fields.CharField', [], {'max_length': '64', 'unique': 'True', 'null': 'True', 'blank': 'True'})
},
u'catalogue.productattribute': {
'Meta': {'ordering': "['code']", 'object_name': 'ProductAttribute'},
'code': ('django.db.models.fields.SlugField', [], {'max_length': '128'}),
'entity_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.AttributeEntityType']", 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'option_group': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.AttributeOptionGroup']", 'null': 'True', 'blank': 'True'}),
'product_class': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'attributes'", 'null': 'True', 'to': u"orm['catalogue.ProductClass']"}),
'required': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'type': ('django.db.models.fields.CharField', [], {'default': "'text'", 'max_length': '20'})
},
u'catalogue.productattributevalue': {
'Meta': {'object_name': 'ProductAttributeValue'},
'attribute': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.ProductAttribute']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'attribute_values'", 'to': u"orm['catalogue.Product']"}),
'value_boolean': ('django.db.models.fields.NullBooleanField', [], {'null': 'True', 'blank': 'True'}),
'value_date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'value_entity': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.AttributeEntity']", 'null': 'True', 'blank': 'True'}),
'value_file': ('django.db.models.fields.files.FileField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'value_float': ('django.db.models.fields.FloatField', [], {'null': 'True', 'blank': 'True'}),
'value_image': ('django.db.models.fields.files.ImageField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'value_integer': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'value_option': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.AttributeOption']", 'null': 'True', 'blank': 'True'}),
'value_richtext': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'value_text': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'})
},
u'catalogue.productcategory': {
'Meta': {'ordering': "['-is_canonical']", 'object_name': 'ProductCategory'},
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.Category']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_canonical': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'db_index': 'True'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.Product']"})
},
u'catalogue.productclass': {
'Meta': {'ordering': "['name']", 'object_name': 'ProductClass'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'options': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['catalogue.Option']", 'symmetrical': 'False', 'blank': 'True'}),
'requires_shipping': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '128'}),
'track_stock': ('django.db.models.fields.BooleanField', [], {'default': 'True'})
},
u'catalogue.productrecommendation': {
'Meta': {'object_name': 'ProductRecommendation'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'primary': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'primary_recommendations'", 'to': u"orm['catalogue.Product']"}),
'ranking': ('django.db.models.fields.PositiveSmallIntegerField', [], {'default': '0'}),
'recommendation': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.Product']"})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'order.billingaddress': {
'Meta': {'object_name': 'BillingAddress'},
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['address.Country']"}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'line1': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'line2': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'line3': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'line4': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'postcode': ('oscar.models.fields.UppercaseCharField', [], {'max_length': '64', 'null': 'True', 'blank': 'True'}),
'search_text': ('django.db.models.fields.CharField', [], {'max_length': '1000'}),
'state': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True', 'blank': 'True'})
},
u'order.line': {
'Meta': {'object_name': 'Line'},
'est_dispatch_date': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'line_price_before_discounts_excl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'line_price_before_discounts_incl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'line_price_excl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'line_price_incl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'lines'", 'to': u"orm['order.Order']"}),
'partner': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'order_lines'", 'null': 'True', 'on_delete': 'models.SET_NULL', 'to': u"orm['partner.Partner']"}),
'partner_line_notes': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'partner_line_reference': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'blank': 'True'}),
'partner_name': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'partner_sku': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['catalogue.Product']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'quantity': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'stockrecord': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['partner.StockRecord']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'unit_cost_price': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'unit_price_excl_tax': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'unit_price_incl_tax': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'unit_retail_price': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'upc': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'blank': 'True'})
},
u'order.order': {
'Meta': {'ordering': "['-date_placed']", 'object_name': 'Order'},
'basket_id': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'billing_address': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['order.BillingAddress']", 'null': 'True', 'blank': 'True'}),
'currency': ('django.db.models.fields.CharField', [], {'default': "'GBP'", 'max_length': '12'}),
'date_placed': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'db_index': 'True', 'blank': 'True'}),
'guest_email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'number': ('django.db.models.fields.CharField', [], {'max_length': '128', 'db_index': 'True'}),
'shipping_address': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['order.ShippingAddress']", 'null': 'True', 'blank': 'True'}),
'shipping_code': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '128', 'blank': 'True'}),
'shipping_excl_tax': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '12', 'decimal_places': '2'}),
'shipping_incl_tax': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '12', 'decimal_places': '2'}),
'shipping_method': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'blank': 'True'}),
'site': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['sites.Site']"}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'total_excl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'total_incl_tax': ('django.db.models.fields.DecimalField', [], {'max_digits': '12', 'decimal_places': '2'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'orders'", 'null': 'True', 'to': u"orm['{}']".format(AUTH_USER_MODEL)})
},
u'order.shippingaddress': {
'Meta': {'object_name': 'ShippingAddress'},
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['address.Country']"}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'line1': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'line2': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'line3': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'line4': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'notes': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'phone_number': ('oscar.models.fields.PhoneNumberField', [], {'max_length': '128', 'blank': 'True'}),
'postcode': ('oscar.models.fields.UppercaseCharField', [], {'max_length': '64', 'null': 'True', 'blank': 'True'}),
'search_text': ('django.db.models.fields.CharField', [], {'max_length': '1000'}),
'state': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True', 'blank': 'True'})
},
u'oscar_support.attachment': {
'Meta': {'object_name': 'Attachment'},
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'file': ('django.db.models.fields.files.FileField', [], {'max_length': '100'}),
'ticket': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'attachments'", 'to': u"orm['oscar_support.Ticket']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'attachments'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.message': {
'Meta': {'ordering': "['-date_created']", 'object_name': 'Message'},
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'text': ('django.db.models.fields.TextField', [], {}),
'ticket': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'messages'", 'to': u"orm['oscar_support.Ticket']"}),
'type': ('django.db.models.fields.CharField', [], {'default': "u'public'", 'max_length': '30'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'messages'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.priority': {
'Meta': {'object_name': 'Priority'},
'comment': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'slug': ('django_extensions.db.fields.AutoSlugField', [], {'allow_duplicates': 'False', 'max_length': '50', 'separator': "u'-'", 'blank': 'True', 'populate_from': "'name'", 'overwrite': 'False'}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.relatedorder': {
'Meta': {'object_name': 'RelatedOrder'},
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'order': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'ticket_related_orders'", 'to': u"orm['order.Order']"}),
'ticket': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedorders'", 'to': u"orm['oscar_support.Ticket']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedorders'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.relatedorderline': {
'Meta': {'object_name': 'RelatedOrderLine'},
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'line': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'ticket_related_order_lines'", 'to': u"orm['order.Line']"}),
'ticket': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedorderlines'", 'to': u"orm['oscar_support.Ticket']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedorderlines'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.relatedproduct': {
'Meta': {'object_name': 'RelatedProduct'},
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'ticket_related_products'", 'to': u"orm['catalogue.Product']"}),
'ticket': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedproducts'", 'to': u"orm['oscar_support.Ticket']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'relatedproducts'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.ticket': {
'Meta': {'ordering': "['-date_updated']", 'unique_together': "(('number', 'subticket_id'),)", 'object_name': 'Ticket'},
'assigned_group': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'tickets'", 'null': 'True', 'to': u"orm['auth.Group']"}),
'assignee': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'assigned_tickets'", 'null': 'True', 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'body': ('django.db.models.fields.TextField', [], {}),
'date_created': ('django.db.models.fields.DateTimeField', [], {}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {}),
'is_internal': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'number': ('django.db.models.fields.CharField', [], {'max_length': '64', 'db_index': 'True'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'subtickets'", 'null': 'True', 'to': u"orm['oscar_support.Ticket']"}),
'priority': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'tickets'", 'null': 'True', 'to': u"orm['oscar_support.Priority']"}),
'related_lines': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'tickets'", 'blank': 'True', 'through': u"orm['oscar_support.RelatedOrderLine']", 'to': u"orm['order.Line']"}),
'related_orders': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'tickets'", 'blank': 'True', 'through': u"orm['oscar_support.RelatedOrder']", 'to': u"orm['order.Order']"}),
'related_products': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'tickets'", 'blank': 'True', 'through': u"orm['oscar_support.RelatedProduct']", 'to': u"orm['catalogue.Product']"}),
'requester': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'submitted_tickets'", 'to': u"orm['{}']".format(AUTH_USER_MODEL)}),
'status': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'tickets'", 'to': u"orm['oscar_support.TicketStatus']"}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'subticket_id': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'tickets'", 'to': u"orm['oscar_support.TicketType']"}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.ticketstatus': {
'Meta': {'object_name': 'TicketStatus'},
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '64'}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'oscar_support.tickettype': {
'Meta': {'object_name': 'TicketType'},
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '64'}),
'uuid': ('shortuuidfield.fields.ShortUUIDField', [], {'max_length': '22', 'primary_key': 'True'})
},
u'partner.partner': {
'Meta': {'object_name': 'Partner'},
'code': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '128'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'blank': 'True'}),
'users': ('django.db.models.fields.related.ManyToManyField', [], {'blank': 'True', 'related_name': "'partners'", 'null': 'True', 'symmetrical': 'False', 'to': u"orm['{}']".format(AUTH_USER_MODEL)})
},
u'partner.stockrecord': {
'Meta': {'unique_together': "(('partner', 'partner_sku'),)", 'object_name': 'StockRecord'},
'cost_price': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'date_created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date_updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'db_index': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'low_stock_threshold': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'num_allocated': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'num_in_stock': ('django.db.models.fields.PositiveIntegerField', [], {'null': 'True', 'blank': 'True'}),
'partner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'stockrecords'", 'to': u"orm['partner.Partner']"}),
'partner_sku': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'price_currency': ('django.db.models.fields.CharField', [], {'default': "'GBP'", 'max_length': '12'}),
'price_excl_tax': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'price_retail': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '12', 'decimal_places': '2', 'blank': 'True'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'stockrecords'", 'to': u"orm['catalogue.Product']"})
},
u'sites.site': {
'Meta': {'ordering': "('domain',)", 'object_name': 'Site', 'db_table': "'django_site'"},
'domain': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
}
}
complete_apps = ['oscar_support']
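The frozen `models` mapping above stores each field as a triple: a dotted field-class path, positional args, and keyword args whose values are string-encoded Python literals. As a rough, illustrative sketch (not South's actual loader, which also imports the field class and handles more cases), one such triple can be decoded like this:

```python
import ast

# Illustrative only: decode one frozen-field triple from the dict above.
# South stores kwargs as strings, so literal_eval turns them back into values.
frozen = ('django.db.models.fields.CharField', [], {'max_length': '64', 'db_index': 'True'})
path, args, raw_kwargs = frozen
kwargs = {k: ast.literal_eval(v) for k, v in raw_kwargs.items()}
print(path.rsplit('.', 1)[1], kwargs)  # → CharField {'max_length': 64, 'db_index': True}
```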
db4a31c7635bd900d42bfe2e3d644c35a7789fb4 | 156 | py | Python | application.py | r2mars/r2mars.github.io | de718e682806ffbea9762b2ae4511ee4555687b2 | ["MIT"]
from flask import Flask, session, render_template
app = Flask(__name__)
# Main page
@app.route("/")
def index():
return render_template('index.html')
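The `@app.route("/")` decorator above binds the URL rule to the view function. As a hedged sketch of the idea (not Flask's actual internals, which go through Werkzeug URL maps), routing amounts to a decorator that records views in a dict and a dispatcher that looks them up:

```python
# Hedged sketch, not Flask internals: MiniApp is an illustrative name.
class MiniApp:
    def __init__(self):
        self.url_map = {}

    def route(self, rule):
        def decorator(view):
            self.url_map[rule] = view  # remember which view serves this rule
            return view
        return decorator

    def dispatch(self, path):
        return self.url_map[path]()

mini_app = MiniApp()

@mini_app.route("/")
def home():
    return "rendered index.html"

print(mini_app.dispatch("/"))  # → rendered index.html
```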
db4ce52165b02e24ed724663528a74597e979535 | 162 | py | Python | lambdatav3peggyk1/__init__.py/classnumcell.py | PeggyK1/lambdatav3 | 8b016bd4c278a5a9d5dbaaa25733bde92bae3b7d | ["MIT"]
import pandas as pd
class MyDataFrame(pd.DataFrame):
    """Reports number of cells."""

    def num_cells(self):
        return self.shape[0] * self.shape[1]
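The `num_cells` method just multiplies the two entries of `shape`. A dependency-free sketch of the same rows-times-columns arithmetic, using a hypothetical `Table` stand-in instead of pandas:

```python
# Illustrative stand-in for a 2-D table with a pandas-like .shape.
class Table:
    def __init__(self, rows):
        self.rows = rows

    @property
    def shape(self):
        # (row count, column count); an empty table has zero columns.
        return (len(self.rows), len(self.rows[0]) if self.rows else 0)

    def num_cells(self):
        # Same arithmetic as the DataFrame subclass above: rows * columns.
        return self.shape[0] * self.shape[1]

print(Table([[1, 2, 3], [4, 5, 6]]).num_cells())  # → 6
```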
db53d2461a718bba302e4782e560c97ec3251b6a | 93 | py | Python | foods/admin.py | Glucemy/Glucemy-back | c9fcf7996b3f13c67697aadd449e3e32afb1fa1b | ["MIT"]
from django.contrib import admin
from foods.models import Foods
admin.site.register(Foods)
db6c4da596bb1523cafcef59e8de2ce2d65da99b | 240 | py | Python | vendor/paypal/standard/pdt/forms.py | starsep/NewsBlur | 6c59416ca82377ca1bbc7d044890bdead3eba904 | ["MIT"] | stars: 24 | issues: 21 | forks: 13
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from paypal.standard.forms import PayPalStandardBaseForm
from paypal.standard.pdt.models import PayPalPDT
class PayPalPDTForm(PayPalStandardBaseForm):
    class Meta:
        model = PayPalPDT
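The form above relies on the ModelForm convention of an inner `Meta` class naming the model. A minimal sketch of how a base class can pick up that inner `Meta` at class-creation time (Django's real metaclass machinery additionally builds form fields from the model; `FormBase`, `FakeModel`, and `SketchForm` are illustrative names):

```python
# Hedged sketch of the inner-Meta pattern, not Django's implementation.
class FormBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Read the subclass's inner Meta when the subclass is created.
        meta = getattr(cls, "Meta", None)
        cls.model = getattr(meta, "model", None)

class FakeModel:  # stand-in for a model class such as PayPalPDT
    pass

class SketchForm(FormBase):
    class Meta:
        model = FakeModel

print(SketchForm.model.__name__)  # → FakeModel
```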
9155a971b5da10cf5833703f3cb25d69e8548f7a | 46 | py | Python | ss/settings.py | jeffchen81/stock-starer | baf2128acdaa3e32aff3b2fc1f79816b0b4d1df6 | ["MIT"] | stars: 4 | issues: 1 | forks: 4
# -*- coding: utf-8 -*-
# author: Jeff.Chen
915e58a613d9b7af8efa29c31e2abfca8e40eb8b | 137 | py | Python | brane-ide/kernels/bscript/bscript_kernel/__main__.py | romnn/brane | 03752edd85a09a5ffb817b9f6a0fa03c8e9b277a | ["Apache-2.0"]
from ipykernel.kernelapp import IPKernelApp
from . import BraneScriptKernel
IPKernelApp.launch_instance(kernel_class=BraneScriptKernel)
916aa484933ac5770f79eae97f6679a3ee08cbbb | 21,247 | py | Python | etl/parsers/etw/Microsoft_Windows_Hyper_V_VID.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | ["Apache-2.0"] | stars: 104 | issues: 7 | forks: 16
# -*- coding: utf-8 -*-
"""
Microsoft-Windows-Hyper-V-VID
GUID : 5931d877-4860-4ee7-a95c-610a5f0d1407
"""
from construct import Int8sl, Int8ul, Int16ul, Int16sl, Int32sl, Int32ul, Int64sl, Int64ul, Bytes, Double, Float32l, Struct
from etl.utils import WString, CString, SystemTime, Guid
from etl.dtyp import Sid
from etl.parsers.etw.core import Etw, declare, guid
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1101, version=0)
class Microsoft_Windows_Hyper_V_VID_1101_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1102, version=0)
class Microsoft_Windows_Hyper_V_VID_1102_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1103, version=0)
class Microsoft_Windows_Hyper_V_VID_1103_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1104, version=0)
class Microsoft_Windows_Hyper_V_VID_1104_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1105, version=0)
class Microsoft_Windows_Hyper_V_VID_1105_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1106, version=0)
class Microsoft_Windows_Hyper_V_VID_1106_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1107, version=0)
class Microsoft_Windows_Hyper_V_VID_1107_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1108, version=0)
class Microsoft_Windows_Hyper_V_VID_1108_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1109, version=0)
class Microsoft_Windows_Hyper_V_VID_1109_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1110, version=0)
class Microsoft_Windows_Hyper_V_VID_1110_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=1111, version=0)
class Microsoft_Windows_Hyper_V_VID_1111_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=3000, version=0)
class Microsoft_Windows_Hyper_V_VID_3000_0(Etw):
pattern = Struct(
"PartitionId" / WString,
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5001, version=0)
class Microsoft_Windows_Hyper_V_VID_5001_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5002, version=0)
class Microsoft_Windows_Hyper_V_VID_5002_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl,
"Parameter3" / Int64sl,
"Parameter4" / Int64sl,
"Parameter5" / Int64sl,
"Parameter6" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5003, version=0)
class Microsoft_Windows_Hyper_V_VID_5003_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5004, version=0)
class Microsoft_Windows_Hyper_V_VID_5004_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5005, version=0)
class Microsoft_Windows_Hyper_V_VID_5005_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5006, version=0)
class Microsoft_Windows_Hyper_V_VID_5006_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5007, version=0)
class Microsoft_Windows_Hyper_V_VID_5007_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5008, version=0)
class Microsoft_Windows_Hyper_V_VID_5008_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5009, version=0)
class Microsoft_Windows_Hyper_V_VID_5009_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5010, version=0)
class Microsoft_Windows_Hyper_V_VID_5010_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5011, version=0)
class Microsoft_Windows_Hyper_V_VID_5011_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5012, version=0)
class Microsoft_Windows_Hyper_V_VID_5012_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5013, version=0)
class Microsoft_Windows_Hyper_V_VID_5013_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5014, version=0)
class Microsoft_Windows_Hyper_V_VID_5014_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5017, version=0)
class Microsoft_Windows_Hyper_V_VID_5017_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5018, version=0)
class Microsoft_Windows_Hyper_V_VID_5018_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5019, version=0)
class Microsoft_Windows_Hyper_V_VID_5019_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5020, version=0)
class Microsoft_Windows_Hyper_V_VID_5020_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5021, version=0)
class Microsoft_Windows_Hyper_V_VID_5021_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5022, version=0)
class Microsoft_Windows_Hyper_V_VID_5022_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5023, version=0)
class Microsoft_Windows_Hyper_V_VID_5023_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5024, version=0)
class Microsoft_Windows_Hyper_V_VID_5024_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5025, version=0)
class Microsoft_Windows_Hyper_V_VID_5025_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5026, version=0)
class Microsoft_Windows_Hyper_V_VID_5026_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5029, version=0)
class Microsoft_Windows_Hyper_V_VID_5029_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5030, version=0)
class Microsoft_Windows_Hyper_V_VID_5030_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5031, version=0)
class Microsoft_Windows_Hyper_V_VID_5031_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5032, version=0)
class Microsoft_Windows_Hyper_V_VID_5032_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5033, version=0)
class Microsoft_Windows_Hyper_V_VID_5033_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5034, version=0)
class Microsoft_Windows_Hyper_V_VID_5034_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5035, version=0)
class Microsoft_Windows_Hyper_V_VID_5035_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5036, version=0)
class Microsoft_Windows_Hyper_V_VID_5036_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl,
"Parameter3" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5037, version=0)
class Microsoft_Windows_Hyper_V_VID_5037_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5038, version=0)
class Microsoft_Windows_Hyper_V_VID_5038_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5039, version=0)
class Microsoft_Windows_Hyper_V_VID_5039_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5040, version=0)
class Microsoft_Windows_Hyper_V_VID_5040_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5041, version=0)
class Microsoft_Windows_Hyper_V_VID_5041_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5042, version=0)
class Microsoft_Windows_Hyper_V_VID_5042_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5043, version=0)
class Microsoft_Windows_Hyper_V_VID_5043_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5044, version=0)
class Microsoft_Windows_Hyper_V_VID_5044_0(Etw):
pattern = Struct(
"PartitionId" / WString,
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5045, version=0)
class Microsoft_Windows_Hyper_V_VID_5045_0(Etw):
pattern = Struct(
"PartitionId" / WString,
"Parameter0" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5046, version=0)
class Microsoft_Windows_Hyper_V_VID_5046_0(Etw):
pattern = Struct(
"LowAddress" / Int64ul,
"HighAddress" / Int64ul,
"SkipBytes" / Int64ul,
"TotalBytes" / Int64ul,
"CacheType" / Int32ul,
"NodeIndex" / Int8ul,
"Flags" / Int32ul,
"MemoryPartition" / Int64ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5047, version=0)
class Microsoft_Windows_Hyper_V_VID_5047_0(Etw):
pattern = Struct(
"Mdl" / Int64ul,
"TotalBytes" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5048, version=0)
class Microsoft_Windows_Hyper_V_VID_5048_0(Etw):
pattern = Struct(
"MbpArraySize" / Int64ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5049, version=0)
class Microsoft_Windows_Hyper_V_VID_5049_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5050, version=0)
class Microsoft_Windows_Hyper_V_VID_5050_0(Etw):
pattern = Struct(
"PageCountToBack" / Int64ul,
"KsrBlockId" / Int64ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5051, version=0)
class Microsoft_Windows_Hyper_V_VID_5051_0(Etw):
pattern = Struct(
"MbpArraySize" / Int64ul,
"KsrRunCount" / Int32ul,
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5052, version=0)
class Microsoft_Windows_Hyper_V_VID_5052_0(Etw):
pattern = Struct(
"KsrMemoryRunCount" / Int32ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5053, version=0)
class Microsoft_Windows_Hyper_V_VID_5053_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5054, version=0)
class Microsoft_Windows_Hyper_V_VID_5054_0(Etw):
pattern = Struct(
"MbpArraySize" / Int64ul,
"KsrRunCount" / Int32ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5056, version=0)
class Microsoft_Windows_Hyper_V_VID_5056_0(Etw):
pattern = Struct(
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5057, version=0)
class Microsoft_Windows_Hyper_V_VID_5057_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5058, version=0)
class Microsoft_Windows_Hyper_V_VID_5058_0(Etw):
pattern = Struct(
"KsrPersisted" / Int8sl,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5059, version=0)
class Microsoft_Windows_Hyper_V_VID_5059_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5060, version=0)
class Microsoft_Windows_Hyper_V_VID_5060_0(Etw):
pattern = Struct(
"KsrPersisted" / Int8sl,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5061, version=0)
class Microsoft_Windows_Hyper_V_VID_5061_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5062, version=0)
class Microsoft_Windows_Hyper_V_VID_5062_0(Etw):
pattern = Struct(
"StartGpaPage" / Int64ul,
"StartMbp" / Int64ul,
"MbpCount" / Int64ul,
"InterceptOverrideFlags" / Int32ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5063, version=0)
class Microsoft_Windows_Hyper_V_VID_5063_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5064, version=0)
class Microsoft_Windows_Hyper_V_VID_5064_0(Etw):
pattern = Struct(
"FirstPage" / Int64ul,
"LastPage" / Int64ul,
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5065, version=0)
class Microsoft_Windows_Hyper_V_VID_5065_0(Etw):
pattern = Struct(
"Status" / Int64ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5066, version=0)
class Microsoft_Windows_Hyper_V_VID_5066_0(Etw):
pattern = Struct(
"PartitionGuid" / Guid
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5068, version=0)
class Microsoft_Windows_Hyper_V_VID_5068_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5069, version=0)
class Microsoft_Windows_Hyper_V_VID_5069_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5070, version=0)
class Microsoft_Windows_Hyper_V_VID_5070_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5071, version=0)
class Microsoft_Windows_Hyper_V_VID_5071_0(Etw):
pattern = Struct(
"Parameter0" / Int64sl,
"Parameter1" / Int64sl,
"Parameter2" / Int64sl
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5072, version=0)
class Microsoft_Windows_Hyper_V_VID_5072_0(Etw):
pattern = Struct(
"TotalPages" / Int64ul,
"PlatformDirected" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5073, version=0)
class Microsoft_Windows_Hyper_V_VID_5073_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5074, version=0)
class Microsoft_Windows_Hyper_V_VID_5074_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5075, version=0)
class Microsoft_Windows_Hyper_V_VID_5075_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5076, version=0)
class Microsoft_Windows_Hyper_V_VID_5076_0(Etw):
pattern = Struct(
"PhysicalAddress" / Int64ul,
"PlatformDirected" / Int8ul
)
@declare(guid=guid("5931d877-4860-4ee7-a95c-610a5f0d1407"), event_id=5077, version=0)
class Microsoft_Windows_Hyper_V_VID_5077_0(Etw):
pattern = Struct(
"TotalPages" / Int64ul,
"PlatformDirected" / Int8ul,
"PartitionFriendlyName" / WString,
"PartitionName" / WString,
"Consumed" / Int8ul
)
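Every class in this module is registered through the `@declare(...)` decorator, keyed by provider GUID, event id, and version. A hedged sketch of that registry pattern (the real `etl.parsers.etw.core.declare` may differ in detail; `declare_sketch` and `ExampleEvent` are illustrative names):

```python
# Illustrative registry-decorator sketch, not the library's implementation.
REGISTRY = {}

def declare_sketch(guid, event_id, version=0):
    def wrapper(cls):
        # File the parser class under its (guid, event id, version) key.
        REGISTRY[(guid, event_id, version)] = cls
        return cls
    return wrapper

@declare_sketch(guid="5931d877-4860-4ee7-a95c-610a5f0d1407", event_id=1101)
class ExampleEvent:
    pass

handler = REGISTRY[("5931d877-4860-4ee7-a95c-610a5f0d1407", 1101, 0)]
print(handler.__name__)  # → ExampleEvent
```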
91e01cf677b2ce229a1bea73d8fb8c795ccce440 | 248 | py | Python | spider1/admin.py | EricMbuthia/SeleniumDjangoWebscraping | 27954bcf02b895b3c1001f5924433d6aaf3f195e | ["MIT"]
from django.contrib import admin
from .models import ScrapeRecordsInventory, ScrapeRecords, UndoneNotices
# Register your models here.
admin.site.register(ScrapeRecordsInventory)
admin.site.register(ScrapeRecords)
admin.site.register(UndoneNotices)
91ea2aedc62c92db0056df0c19a6e38619de6104 | 148 | py | Python | src-python/main.py | DevParapalli/ca-adhyaaya-svelte | 9a987b04c9c4dbcd9f30f92fa136eaed426fe356 | [
"MIT"
] | null | null | null | src-python/main.py | DevParapalli/ca-adhyaaya-svelte | 9a987b04c9c4dbcd9f30f92fa136eaed426fe356 | [
"MIT"
] | 1 | 2022-02-27T17:34:16.000Z | 2022-02-27T19:00:33.000Z | src-python/main.py | DevParapalli/ca-adhyaaya-svelte | 9a987b04c9c4dbcd9f30f92fa136eaed426fe356 | [
"MIT"
] | 1 | 2022-02-27T15:12:09.000Z | 2022-02-27T15:12:09.000Z | import clone_collection
import create_CA_code_mapping
import sync_refferal_codes
import create_mailing_csv_from_registration
import update_rankings
| 24.666667 | 43 | 0.932432 | 21 | 148 | 6.047619 | 0.761905 | 0.188976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067568 | 148 | 5 | 44 | 29.6 | 0.92029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
91ec70bde5c7ecb4998d63ba2e3ad759a9cac2d6 | 4,913 | py | Python | javifi.py | LEGEND-LX/PYTHONBOT.py.pkg | 897b05528990acf76fbb2a05538429cd5d178733 | [
"CC0-1.0"
] | 2 | 2021-09-09T06:50:21.000Z | 2021-10-01T16:59:30.000Z | javifi.py | LEGEND-LX/PYTHONBOT.py.pkg | 897b05528990acf76fbb2a05538429cd5d178733 | [
"CC0-1.0"
] | null | null | null | javifi.py | LEGEND-LX/PYTHONBOT.py.pkg | 897b05528990acf76fbb2a05538429cd5d178733 | [
"CC0-1.0"
] | null | null | null | import datetime
import asyncio
from telethon import events
from telethon.errors.rpcerrorlist import YouBlockedUserError, UserAlreadyParticipantError
from telethon.tl.functions.account import UpdateNotifySettingsRequest
from telethon.tl.functions.messages import ImportChatInviteRequest
from LEGENDBOT.utils import admin_cmd, edit_or_reply, sudo_cmd
import time
from userbot import ALIVE_NAME, bot  # `bot` is assumed to be the Telethon client exported by userbot
naam = str(ALIVE_NAME)

# Username of the helper bot queried below. Kept in its own name because the
# original code rebound `bot` (the Telethon client) to this string, which broke
# every `@bot.on(...)` and `bot.conversation(...)` call that followed.
JAV_BOT = "@ceowhitehatcracks"

# command suffix -> (message forwarded to the helper bot, caption template)
JAV_COMMANDS = {
    "h": ("/hello", "➡️**TO BOSS : **{naam}\n`Check This Bot out` [Sensible Userbot](https://github.com/spandey112/SensibleUserbot)"),
    "ss": ("/ss", "**CREDITS : Dr.jr Genesis**\n`Check out` [Sensible Userbot Support](t.me/sensible_userbot)"),
    "--h": ("/help", "**Dr.Bot Is Here To Help**\n`Check out` [Sensible Userbot Support](t.me/sensible_userbot)"),
    "npic": ("/nudepic", "**For {naam}**\n`Check out` [Sensible Userbot Support](t.me/sensible_userbot)"),
    "rs": ("/rs", "**CREDITS : @CEOWHITEHATCRACKS**\n`Check out` [Sensible Userbot Support](t.me/sensible_userbot)"),
    "ib": ("/ib", "**CREDITS : Ceowhitehatcracks**\n`Check out` [Sensible Userbot](https://github.com/spandey112/SensibleUserbot)"),
    "acc": ("/acc", None),
}


@bot.on(admin_cmd("jav ?(.*)"))
async def _(event):
    if event.fwd_from:
        return
    sysarg = event.pattern_match.group(1)
    if sysarg not in JAV_COMMANDS:
        await bot.send_message(event.chat_id, "**INVALID** -- FOR HELP COMMAND IS **.jav --h**")
        await event.delete()
        return
    command, caption = JAV_COMMANDS[sysarg]
    async with bot.conversation(JAV_BOT) as conv:
        try:
            await conv.send_message("/start")
            await conv.get_response()  # discard the /start greeting
            await conv.send_message(command)
            audio = await conv.get_response()
            if caption:
                caption = caption.format(naam=naam)
            await bot.send_file(event.chat_id, audio, caption=caption)
            await event.delete()
        except YouBlockedUserError:
            await event.edit(f"**Error:** `unblock` {JAV_BOT} `and retry!`")
await event.delete()
| 46.349057 | 181 | 0.615103 | 547 | 4,913 | 5.420475 | 0.193784 | 0.084992 | 0.061383 | 0.094435 | 0.758853 | 0.758853 | 0.758853 | 0.758853 | 0.758853 | 0.758853 | 0 | 0.00194 | 0.265418 | 4,913 | 105 | 182 | 46.790476 | 0.819063 | 0 | 0 | 0.55102 | 0 | 0.05102 | 0.228059 | 0.05844 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.091837 | 0 | 0.102041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
37e7b27a67560fa5bdb43b33348639adcb3b687b | 2,457 | py | Python | tests/_apis/league_of_legends/test_ChampionMasteryApiV4.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 489 | 2015-01-04T22:49:51.000Z | 2022-03-28T03:15:54.000Z | tests/_apis/league_of_legends/test_ChampionMasteryApiV4.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 162 | 2015-02-09T22:10:40.000Z | 2022-02-22T13:48:50.000Z | tests/_apis/league_of_legends/test_ChampionMasteryApiV4.py | TheBoringBakery/Riot-Watcher | 6e05fffe127530a75fd63e67da37ba81489fd4fe | [
"MIT"
] | 221 | 2015-01-07T18:01:57.000Z | 2022-03-26T21:18:48.000Z | from unittest.mock import MagicMock
import pytest
from riotwatcher._apis.league_of_legends import ChampionMasteryApiV4
@pytest.mark.lol
@pytest.mark.unit
class TestChampionMasteryApiV4:
def test_by_summoner(self):
mock_base_api = MagicMock()
expected_return = object()
mock_base_api.raw_request.return_value = expected_return
mastery = ChampionMasteryApiV4(mock_base_api)
region = "afas"
encrypted_summoner_id = "15462"
ret = mastery.by_summoner(region, encrypted_summoner_id)
mock_base_api.raw_request.assert_called_once_with(
ChampionMasteryApiV4.__name__,
mastery.by_summoner.__name__,
region,
f"https://{region}.api.riotgames.com/lol/champion-mastery/v4/champion-masteries/by-summoner/{encrypted_summoner_id}",
{},
)
assert ret is expected_return
def test_summoner_by_champion(self):
mock_base_api = MagicMock()
expected_return = object()
mock_base_api.raw_request.return_value = expected_return
mastery = ChampionMasteryApiV4(mock_base_api)
region = "fsgs"
encrypted_summoner_id = "53526"
champion_id = 7
ret = mastery.by_summoner_by_champion(
region, encrypted_summoner_id, champion_id
)
mock_base_api.raw_request.assert_called_once_with(
ChampionMasteryApiV4.__name__,
mastery.by_summoner_by_champion.__name__,
region,
f"https://{region}.api.riotgames.com/lol/champion-mastery/v4/champion-masteries/by-summoner/{encrypted_summoner_id}/by-champion/{champion_id}",
{},
)
assert ret is expected_return
def test_scored_by_summoner(self):
mock_base_api = MagicMock()
expected_return = object()
mock_base_api.raw_request.return_value = expected_return
mastery = ChampionMasteryApiV4(mock_base_api)
region = "fsgs"
encrypted_summoner_id = "6243"
ret = mastery.scores_by_summoner(region, encrypted_summoner_id)
mock_base_api.raw_request.assert_called_once_with(
ChampionMasteryApiV4.__name__,
mastery.scores_by_summoner.__name__,
region,
f"https://{region}.api.riotgames.com/lol/champion-mastery/v4/scores/by-summoner/{encrypted_summoner_id}",
{},
)
assert ret is expected_return
| 32.328947 | 155 | 0.673586 | 273 | 2,457 | 5.626374 | 0.205128 | 0.0625 | 0.085938 | 0.054688 | 0.785156 | 0.761068 | 0.761068 | 0.761068 | 0.734375 | 0.734375 | 0 | 0.014024 | 0.245421 | 2,457 | 75 | 156 | 32.76 | 0.814455 | 0 | 0 | 0.508772 | 0 | 0.052632 | 0.154253 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
530e0e5beb9d0ac8500102c7178a5aaa8ee33885 | 245 | py | Python | src/flask_wtf/__init__.py | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | null | null | null | src/flask_wtf/__init__.py | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | null | null | null | src/flask_wtf/__init__.py | Bonifacio2/flask-wtf | 7fc4a618ecc7880bcb7b03f69fe58d340be986c7 | [
"BSD-3-Clause"
] | null | null | null | from .csrf import CSRFProtect
from .csrf import CsrfProtect
from .form import FlaskForm
from .form import Form
from .recaptcha import Recaptcha
from .recaptcha import RecaptchaField
from .recaptcha import RecaptchaWidget
__version__ = "0.15.1"
| 24.5 | 38 | 0.820408 | 32 | 245 | 6.15625 | 0.40625 | 0.19797 | 0.28934 | 0.253807 | 0.274112 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018779 | 0.130612 | 245 | 9 | 39 | 27.222222 | 0.906103 | 0 | 0 | 0 | 0 | 0 | 0.02449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.875 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
531940a33e3eb67f74d98bbb8c5bb50230bc5a41 | 38 | py | Python | pandapower/converter/powerfactory/__init__.py | lschmelting/pandapower | 1f24eb4946366bb761c26f529149e941da2d6fb0 | [
"BSD-3-Clause"
] | 2 | 2019-11-01T11:01:41.000Z | 2022-02-07T12:55:55.000Z | pandapower/converter/powerfactory/__init__.py | lschmelting/pandapower | 1f24eb4946366bb761c26f529149e941da2d6fb0 | [
"BSD-3-Clause"
] | null | null | null | pandapower/converter/powerfactory/__init__.py | lschmelting/pandapower | 1f24eb4946366bb761c26f529149e941da2d6fb0 | [
"BSD-3-Clause"
] | null | null | null | from .export_pfd_to_pp import from_pfd | 38 | 38 | 0.894737 | 8 | 38 | 3.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
531981d7099cf967674c939fc95b4caed3a6ee90 | 2,579 | py | Python | tests/test_configure_logging.py | pedroburon/python-envtools | c1f00c2f2df3cb01e55a31c3616ebd8f6bd8dda9 | [
"MIT"
] | 3 | 2017-11-28T14:53:54.000Z | 2019-11-05T17:50:21.000Z | tests/test_configure_logging.py | pedroburon/python-envtools | c1f00c2f2df3cb01e55a31c3616ebd8f6bd8dda9 | [
"MIT"
] | null | null | null | tests/test_configure_logging.py | pedroburon/python-envtools | c1f00c2f2df3cb01e55a31c3616ebd8f6bd8dda9 | [
"MIT"
] | null | null | null |
import unittest
from envtools import override_environment
from envtools.logging_config import configure_logging
class TestConfigureLoggingLevel(unittest.TestCase):
@override_environment(LOGGING_LEVEL_module='DEBUG')
def test_change_level(self):
result = configure_logging({
'loggers': {
'module': {
'handlers': ['console'],
'level': 'INFO',
},
},
})
self.assertEqual(
{
'loggers': {
'module': {
'handlers': ['console'],
'level': 'DEBUG',
},
},
},
result
)
@override_environment(LOGGING_LEVEL_module='INFO')
def test_maintain_level(self):
result = configure_logging({
'loggers': {
'module': {
'handlers': ['console'],
'level': 'INFO',
},
},
})
self.assertEqual(
{
'loggers': {
'module': {
'handlers': ['console'],
'level': 'INFO',
},
},
},
result
)
@override_environment(LOGGING_LEVEL_module_submodule_subsub='DEBUG')
def test_dotpath_level(self):
result = configure_logging({
'loggers': {
'module.submodule.subsub': {
'handlers': ['console'],
'level': 'INFO',
},
},
})
self.assertEqual(
{
'loggers': {
'module.submodule.subsub': {
'handlers': ['console'],
'level': 'DEBUG',
},
},
},
result
)
@override_environment(LOGGING_LEVEL_module_submodule_subsub='DEBUG')
def test_non_existent(self):
result = configure_logging({
'loggers': {
'module': {
'handlers': ['console'],
'level': 'INFO',
},
},
})
self.assertEqual(
{
'loggers': {
'module': {
'handlers': ['console'],
'level': 'INFO',
},
},
},
result
)
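The tests above pin down a small contract: an environment variable named `LOGGING_LEVEL_<dotted_path with dots as underscores>` overrides the `level` of the matching logger, and unknown paths are ignored. A stdlib-only sketch of that contract (the function name and signature are assumptions for illustration, not envtools' real API):

```python
def override_levels(config, env, prefix="LOGGING_LEVEL_"):
    # Map e.g. LOGGING_LEVEL_module_submodule -> logger "module.submodule"
    # and overwrite its configured level; leave everything else untouched.
    loggers = config.get("loggers", {})
    for key, value in env.items():
        if not key.startswith(prefix):
            continue
        dotted = key[len(prefix):].replace("_", ".")
        if dotted in loggers:
            loggers[dotted]["level"] = value
    return config
```

With this sketch, `test_dotpath_level` corresponds to `override_levels(cfg, {"LOGGING_LEVEL_module_submodule_subsub": "DEBUG"})` and `test_non_existent` to an env key whose dotted path is absent from `cfg["loggers"]`.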
| 25.79 | 72 | 0.373401 | 142 | 2,579 | 6.56338 | 0.21831 | 0.111588 | 0.171674 | 0.180258 | 0.786481 | 0.746781 | 0.746781 | 0.667382 | 0.611588 | 0.611588 | 0 | 0 | 0.511439 | 2,579 | 99 | 73 | 26.050505 | 0.739683 | 0 | 0 | 0.568182 | 0 | 0 | 0.136152 | 0.017843 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.045455 | false | 0 | 0.034091 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
533a79bf8ba1df68d29b739965c4e3117edbb65c | 117 | py | Python | ThisIsTheProject/app1/main.py | AlexandreSiedschlag/ProMaxima | d74865030214ec708af8d268965d709e73de4199 | [
"Unlicense"
] | null | null | null | ThisIsTheProject/app1/main.py | AlexandreSiedschlag/ProMaxima | d74865030214ec708af8d268965d709e73de4199 | [
"Unlicense"
] | null | null | null | ThisIsTheProject/app1/main.py | AlexandreSiedschlag/ProMaxima | d74865030214ec708af8d268965d709e73de4199 | [
"Unlicense"
] | null | null | null | from functions import do_something
do_something()
#Login: admin
#Password: 1234
#email = alexandresieds@gmail.com
| 13 | 34 | 0.786325 | 15 | 117 | 6 | 0.866667 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.128205 | 117 | 8 | 35 | 14.625 | 0.843137 | 0.495727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
534bd579a21dfdeaa9b1ff2eb464aad6f9ec2d0e | 354 | py | Python | ddi_search_engine/Bio/expressions/__init__.py | dbmi-pitt/DIKB-Evidence-analytics | 9ffd629db30c41ced224ff2afdf132ce9276ae3f | [
"MIT"
] | 3 | 2015-06-08T17:58:54.000Z | 2022-03-10T18:49:44.000Z | ddi_search_engine/Bio/expressions/__init__.py | dbmi-pitt/DIKB-Evidence-analytics | 9ffd629db30c41ced224ff2afdf132ce9276ae3f | [
"MIT"
] | null | null | null | ddi_search_engine/Bio/expressions/__init__.py | dbmi-pitt/DIKB-Evidence-analytics | 9ffd629db30c41ced224ff2afdf132ce9276ae3f | [
"MIT"
] | null | null | null | # This is a Python module.
import warnings
warnings.warn("Bio.expressions was deprecated, as it does not work with recent versions of mxTextTools. If you want to continue to use this module, please get in contact with the Biopython developers at biopython-dev@biopython.org to avoid permanent removal of this module from Biopython", DeprecationWarning)
| 70.8 | 309 | 0.813559 | 55 | 354 | 5.236364 | 0.781818 | 0.069444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144068 | 354 | 4 | 310 | 88.5 | 0.950495 | 0.067797 | 0 | 0 | 0 | 0.5 | 0.829268 | 0.082317 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
536daa8cef5714ea9320732f7aa943945ccd3067 | 271 | py | Python | visualpriors/__init__.py | memmelma/visual-prior | 6b9c65f291c587fcbb3fcc3f61f76cdd1c3eb175 | [
"MIT"
] | 1 | 2022-01-13T17:08:51.000Z | 2022-01-13T17:08:51.000Z | visualpriors/__init__.py | memmelma/visual-prior | 6b9c65f291c587fcbb3fcc3f61f76cdd1c3eb175 | [
"MIT"
] | null | null | null | visualpriors/__init__.py | memmelma/visual-prior | 6b9c65f291c587fcbb3fcc3f61f76cdd1c3eb175 | [
"MIT"
] | null | null | null | from .transforms import representation_transform, multi_representation_transform, max_coverage_featureset_transform
from .transforms import feature_readout, multi_feature_readout
from .transforms import get_networks, get_viable_feature_tasks, get_max_coverate_featuresets | 90.333333 | 115 | 0.911439 | 33 | 271 | 7 | 0.515152 | 0.181818 | 0.25974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059041 | 271 | 3 | 116 | 90.333333 | 0.905882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
536e744953d2f364f8c3c9126e16e25dd49ebd6f | 14,901 | py | Python | target_describe/target_describe.py | DanielR59/target_description | 51e0f7a7ece1b45a55f1eed17abdb31532c5b2c2 | [
"MIT"
] | null | null | null | target_describe/target_describe.py | DanielR59/target_description | 51e0f7a7ece1b45a55f1eed17abdb31532c5b2c2 | [
"MIT"
] | null | null | null | target_describe/target_describe.py | DanielR59/target_description | 51e0f7a7ece1b45a55f1eed17abdb31532c5b2c2 | [
"MIT"
] | null | null | null | from typing import List, Optional, Union
from typing_extensions import Literal
import pandas as pd
from .utils import (
calculate_bins,
get_variable_and_target,
plot_numerical_variable,
sample_and_get_distribution,
select_categorical_text,
select_numeric,
plot_variable,
calculate_distribution,
)
class targetDescribe:
def __init__(
self,
data: pd.DataFrame,
target: Union[pd.Series, str],
problem: Literal["binary_classification", "regression"],
max_categories: int = 30,
target_described: Optional[str] = None,
nbins: int = 15,
) -> None:
__target_in_df = False
if isinstance(target, str):
try:
self._target_name = target
self.target = data[target].copy()
__target_in_df = True
except KeyError:
raise KeyError(f"{target} not in DataFrame")
else:
self._target_name = target.name
self.target = target.copy()
if target_described:
self.target_value_described = target_described
else:
self.target_value_described = str(self.target.unique()[-1])
self.nbins = nbins
self.max_categories = max_categories
self.data = data.copy()
self.problem = problem
self.split_variables()
self.numeric_variables = self._append_target(
variables=self.numeric_variables, target_in_df=__target_in_df
)
self.categorical_variables = self._append_target(
variables=self.categorical_variables, target_in_df=__target_in_df
)
def split_variables(self) -> None:
self.numeric_variables = select_numeric(self.data)
self.categorical_variables = select_categorical_text(self.data)
def _append_target(
self, variables: pd.DataFrame, target_in_df: bool
) -> pd.DataFrame:
if target_in_df:
if self.target.name in list(variables.columns):
return variables
else:
return pd.concat([variables, self.target], axis=1)
else:
return pd.concat([variables, self.target], axis=1)
def all_associations(
self,
target_value_described: Optional[str] = None,
export: bool = False,
max_categories: Optional[int] = None,
nbins: Optional[int] = None,
random_state: Optional[int] = None,
nbins_round_2: Optional[dict] = None,
sort_by: Literal["rows", "variable",
"target_asc", "target_desc"] = "rows"
):
if nbins:
self.nbins = nbins
if max_categories:
self.max_categories = max_categories
if target_value_described:
self.target_value_described = target_value_described
if sort_by not in ["rows", "variable", "target_asc", "target_desc"]:
print("Incorrect sort_by option using default")
sort_by = "rows"
if self.problem == "binary_classification":
if self.target.nunique() != 2:
                raise ValueError("Not binary target")
numeric_columns = [
name
for name in list(self.numeric_variables.columns)
if name not in [self._target_name]
]
categorical_columns = [
name
for name in list(self.categorical_variables.columns)
if name not in [self._target_name]
]
for nombre in numeric_columns:
if self.numeric_variables[nombre].dtype in ["int64", "int32", "int16"]:
num_categorias = self.numeric_variables[nombre].nunique()
if num_categorias <= self.max_categories:
proporcion = calculate_distribution(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sort_by=sort_by
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
else:
proporcion = sample_and_get_distribution(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sample_size=self.max_categories,
random_state=random_state,
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
elif self.numeric_variables[nombre].dtype in ["float64", "float32"]:
proporcion, counts, bins = calculate_bins(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
nbins=self.nbins,
)
plot_numerical_variable(
get_variable_and_target(
self.numeric_variables, nombre, self._target_name
),
proporcion,
counts,
bins,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
nbins=self.nbins,
nbins_round_2=nbins_round_2,
export=export,
)
for nombre in categorical_columns:
if self.categorical_variables[nombre].dtype == "object":
num_categorias = self.categorical_variables[nombre].nunique(
)
if num_categorias <= self.max_categories:
proporcion = calculate_distribution(
df=self.categorical_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sort_by=sort_by
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
else:
proporcion = sample_and_get_distribution(
df=self.categorical_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sample_size=self.max_categories,
random_state=random_state,
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
    def describe_some(
        self,
        columns: List[str],
        target_value_described: Optional[str] = None,
        export: bool = False,
        max_categories: Optional[int] = None,
        nbins: Optional[int] = None,
        random_state: Optional[int] = None,
        nbins_round_2: Optional[dict] = None,
        sort_by: Literal["rows", "variable", "target_asc", "target_desc"] = "rows",
    ):
if nbins:
self.nbins = nbins
if max_categories:
self.max_categories = max_categories
if target_value_described:
self.target_value_described = target_value_described
if sort_by not in ["rows", "variable", "target_asc", "target_desc"]:
print("Incorrect sort_by option using default")
sort_by = "rows"
if self.problem == "binary_classification":
if self.target.nunique() != 2:
                raise ValueError("Not binary target")
for nombre in columns:
if nombre in self.numeric_variables.columns:
if self.numeric_variables[nombre].dtype in ["int64", "int32", "int16"]:
num_categorias = self.numeric_variables[nombre].nunique(
)
if num_categorias <= self.max_categories:
proporcion = calculate_distribution(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sort_by=sort_by
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
else:
proporcion = sample_and_get_distribution(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sample_size=self.max_categories,
random_state=random_state,
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
elif self.numeric_variables[nombre].dtype in ["float64", "float32"]:
proporcion, counts, bins = calculate_bins(
df=self.numeric_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
nbins=self.nbins,
)
plot_numerical_variable(
get_variable_and_target(
self.numeric_variables, nombre, self._target_name
),
proporcion,
counts,
bins,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
nbins=self.nbins,
nbins_round_2=nbins_round_2,
export=export,
)
elif nombre in self.categorical_variables.columns:
if self.categorical_variables[nombre].dtype == "object":
num_categorias = self.categorical_variables[nombre].nunique(
)
if num_categorias <= self.max_categories:
proporcion = calculate_distribution(
df=self.categorical_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sort_by=sort_by
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
else:
proporcion = sample_and_get_distribution(
df=self.categorical_variables,
variable=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
sample_size=self.max_categories,
random_state=random_state,
)
plot_variable(
df=proporcion,
variable_name=nombre,
target_name=self._target_name,
target_value_described=self.target_value_described,
export=export,
)
else:
print(f"{nombre} no esta en el dataframe cainal")
if __name__ == "__main__":
df = pd.read_csv(
"https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv"
)
hola = df["Survived"].astype(str)
df.drop(axis=1, labels="Survived", inplace=True)
# print(df.columns)
# a = targetDescribe(data=df, target="Survived", problem="binary_classification")
b = targetDescribe(data=df, target=hola, problem="binary_classification")
b.all_associations(
max_categories=10, export=True,
)
# b.all_associations(export=True, target_value_described="0")
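# The class above delegates the per-category target proportion to
# utils.calculate_distribution. A stdlib-only sketch of that core computation
# (function name and row format are hypothetical stand-ins, not the real
# utils API, which operates on pandas DataFrames):

```python
from collections import Counter

def target_rate_by_category(rows, variable, target, target_value):
    """Share of rows per category of `variable` where `target` equals
    `target_value` -- the quantity plotted for each variable above."""
    totals = Counter()
    hits = Counter()
    for row in rows:
        cat = row[variable]
        totals[cat] += 1
        if row[target] == target_value:
            hits[cat] += 1
    return {cat: hits[cat] / totals[cat] for cat in totals}
```

# For the Titanic example in __main__, this would give the survival rate per
# value of a column such as "Sex" or "Pclass".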
| 41.856742 | 333 | 0.485471 | 1,236 | 14,901 | 5.521845 | 0.109223 | 0.086447 | 0.149451 | 0.087912 | 0.767033 | 0.753553 | 0.736703 | 0.72 | 0.72 | 0.695678 | 0 | 0.004762 | 0.45044 | 14,901 | 355 | 334 | 41.974648 | 0.828673 | 0.010536 | 0 | 0.634868 | 0 | 0 | 0.039824 | 0.005699 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016447 | false | 0 | 0.013158 | 0 | 0.042763 | 0.009868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
725e9cc850d6955931097cf0184c06b23e2acb03 | 269 | py | Python | notes/design/low-level/case-studies/auction-system/auction/commands/AbstractCommand.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | 6 | 2020-07-05T05:15:19.000Z | 2021-01-24T20:17:14.000Z | notes/design/low-level/case-studies/auction-system/auction/commands/AbstractCommand.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | null | null | null | notes/design/low-level/case-studies/auction-system/auction/commands/AbstractCommand.py | Anmol-Singh-Jaggi/interview-notes | 65af75e2b5725894fa5e13bb5cd9ecf152a0d652 | [
"MIT"
] | 2 | 2020-09-14T06:46:37.000Z | 2021-06-15T09:17:21.000Z | from abc import ABC, abstractmethod
from model.AuctionSystem import AuctionSystem
class AbstractCommand(ABC):
def __init__(self, auction_system: AuctionSystem):
self.auction_system = auction_system
@abstractmethod
def execute(self):
pass
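# A sketch of how AbstractCommand is meant to be used: subclasses capture their
# arguments and implement execute() against the shared AuctionSystem. Both
# PlaceBidCommand and the minimal stand-in AuctionSystem below are hypothetical,
# shown only to illustrate the command pattern; the class is restated so the
# example is self-contained.

```python
from abc import ABC, abstractmethod


class AuctionSystem:
    """Stand-in for model.AuctionSystem, just enough to demonstrate a command."""

    def place_bid(self, item, amount):
        return f"bid {amount} on {item}"


class AbstractCommand(ABC):
    def __init__(self, auction_system):
        self.auction_system = auction_system

    @abstractmethod
    def execute(self):
        pass


class PlaceBidCommand(AbstractCommand):
    def __init__(self, auction_system, item, amount):
        super().__init__(auction_system)
        self.item = item
        self.amount = amount

    def execute(self):
        # All state was captured at construction time, so the caller can
        # queue, log, or replay the command without knowing its details.
        return self.auction_system.place_bid(self.item, self.amount)
```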
| 22.416667 | 54 | 0.743494 | 29 | 269 | 6.655172 | 0.517241 | 0.202073 | 0.176166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197026 | 269 | 11 | 55 | 24.454545 | 0.893519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.125 | 0.25 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
72b15f81c8b9afe16649daf9402a10d6c03eae76 | 2,214 | py | Python | tests/test_dynamic_rerun_disabled_option.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | null | null | null | tests/test_dynamic_rerun_disabled_option.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | 1 | 2020-08-10T00:58:07.000Z | 2020-08-10T03:47:55.000Z | tests/test_dynamic_rerun_disabled_option.py | gnikonorov/pytest-dynamicrerun | 200f45830be5de3d088d092aaac2b11626c04668 | [
"MIT"
] | null | null | null | # This file contains tests specific to the dynamic_rerun_disabled option
import pytest
from helpers import _assert_result_outcomes
def test_dynamic_rerun_disabled_false_by_default(testdir):
dynamic_rerun_attempts = 3
testdir.makeini(
"""
[pytest]
dynamic_rerun_attempts = {}
dynamic_rerun_schedule = * * * * * *
""".format(
dynamic_rerun_attempts
)
)
testdir.makepyfile("def test_always_false(): assert False")
result = testdir.runpytest("-v")
assert result.ret == pytest.ExitCode.TESTS_FAILED
_assert_result_outcomes(result, dynamic_rerun=dynamic_rerun_attempts, failed=1)
@pytest.mark.parametrize(
"dynamic_rerun_disabled",
["TRUE", "True", "TrUe", "true", "y", "yes", "t", "true", "on", "1"],
)
def test_dynamic_rerun_disabled_works_for_true_values(testdir, dynamic_rerun_disabled):
testdir.makeini(
"""
[pytest]
dynamic_rerun_attempts = 3
dynamic_rerun_disabled = {}
dynamic_rerun_schedule = * * * * * *
""".format(
dynamic_rerun_disabled
)
)
testdir.makepyfile("def test_always_false(): assert False")
result = testdir.runpytest("-v")
assert result.ret == pytest.ExitCode.TESTS_FAILED
_assert_result_outcomes(result, dynamic_rerun=0, failed=1)
@pytest.mark.parametrize(
"dynamic_rerun_disabled",
[
"doit",
"ok",
"fine",
"n",
"NO",
"FaLsE",
"OFF",
"noway",
"",
"stopit",
"123",
"0",
],
)
def test_dynamic_rerun_disabled_works_for_false_values(testdir, dynamic_rerun_disabled):
dynamic_rerun_attempts = 3
testdir.makeini(
"""
[pytest]
dynamic_rerun_attempts = {}
dynamic_rerun_disabled = {}
dynamic_rerun_schedule = * * * * * *
""".format(
dynamic_rerun_attempts, dynamic_rerun_disabled
)
)
testdir.makepyfile("def test_always_false(): assert False")
result = testdir.runpytest("-v")
assert result.ret == pytest.ExitCode.TESTS_FAILED
_assert_result_outcomes(result, dynamic_rerun=dynamic_rerun_attempts, failed=1)
| 25.159091 | 88 | 0.631436 | 232 | 2,214 | 5.655172 | 0.24569 | 0.246951 | 0.182927 | 0.043445 | 0.84375 | 0.772866 | 0.742378 | 0.657012 | 0.589177 | 0.519055 | 0 | 0.007255 | 0.252936 | 2,214 | 87 | 89 | 25.448276 | 0.785973 | 0.031617 | 0 | 0.418182 | 0 | 0 | 0.129014 | 0.025229 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.054545 | false | 0 | 0.036364 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
72cafeef823f6f538d3416ee440ce47f37267f54 | 191 | py | Python | backend/restapi/models/__init__.py | IlluminateMedia/prompts-ai | 04723f21f165811ab784f8eebab10b7df2d02075 | [
"MIT"
] | null | null | null | backend/restapi/models/__init__.py | IlluminateMedia/prompts-ai | 04723f21f165811ab784f8eebab10b7df2d02075 | [
"MIT"
] | null | null | null | backend/restapi/models/__init__.py | IlluminateMedia/prompts-ai | 04723f21f165811ab784f8eebab10b7df2d02075 | [
"MIT"
] | null | null | null | from .custom_model import CustomModel
from .shared_prompt import SharedPrompt
from .workspace import Workspace
from .airtable import Airtable
from .airtable_workspace import AirtableWorkspace | 38.2 | 49 | 0.874346 | 23 | 191 | 7.130435 | 0.478261 | 0.182927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099476 | 191 | 5 | 49 | 38.2 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
72e07b62b5e8e09cb90d3bbe3b808367cc8a6dd2 | 2,575 | py | Python | project/editorial/migrations/0091_auto_20190512_1710.py | ProjectFacet/facet | dc6bc79d450f7e2bdf59cfbcd306d05a736e4db9 | [
"MIT"
] | 25 | 2015-07-13T22:16:36.000Z | 2021-11-11T02:45:32.000Z | project/editorial/migrations/0091_auto_20190512_1710.py | ProjectFacet/facet | dc6bc79d450f7e2bdf59cfbcd306d05a736e4db9 | [
"MIT"
] | 74 | 2015-12-01T18:57:47.000Z | 2022-03-11T23:25:47.000Z | project/editorial/migrations/0091_auto_20190512_1710.py | ProjectFacet/facet | dc6bc79d450f7e2bdf59cfbcd306d05a736e4db9 | [
"MIT"
] | 6 | 2016-01-08T21:12:43.000Z | 2019-05-20T16:07:56.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.9 on 2019-05-13 00:10
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('editorial', '0090_auto_20190512_1558'),
]
operations = [
migrations.RemoveField(
model_name='organizationdiscoveryprofile',
name='platforms',
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_cable_tv',
field=models.BooleanField(default=False, help_text=b'Organization airs on cable television.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_network_tv',
field=models.BooleanField(default=False, help_text=b'Organization airs on network television.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_newsletter',
field=models.BooleanField(default=False, help_text=b'Organization publishes newsletters.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_online',
field=models.BooleanField(default=False, help_text=b'Organization publishes online.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_podcast',
field=models.BooleanField(default=False, help_text=b'Organization produces podcasts.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_print',
field=models.BooleanField(default=False, help_text=b'Organization publishes in print.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_radio',
field=models.BooleanField(default=False, help_text=b'Organization airs on radio.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_social',
field=models.BooleanField(default=False, help_text=b'Organization publishes content on social platforms.'),
),
migrations.AddField(
model_name='organizationdiscoveryprofile',
name='platform_streaming_video',
field=models.BooleanField(default=False, help_text=b'Organization content airs on streaming video.'),
),
]
| 39.615385 | 119 | 0.645437 | 230 | 2,575 | 7.056522 | 0.286957 | 0.055453 | 0.227973 | 0.252619 | 0.74122 | 0.74122 | 0.74122 | 0.534812 | 0.346272 | 0.277264 | 0 | 0.017223 | 0.255922 | 2,575 | 64 | 120 | 40.234375 | 0.829854 | 0.026408 | 0 | 0.508772 | 1 | 0 | 0.320687 | 0.130591 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035088 | 0 | 0.087719 | 0.035088 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
72ecfa820dcbb2cb3632b9f36b274f5326a60330 | 30,736 | py | Python | seaice/data/test/test_gridset_filters.py | andypbarrett/nsidc-seaice | 167a16309f7eaadd5c613b54a7df26eb1f48c2f3 | [
"MIT"
] | 2 | 2020-08-27T08:40:22.000Z | 2021-04-14T15:42:09.000Z | seaice/data/test/test_gridset_filters.py | andypbarrett/nsidc-seaice | 167a16309f7eaadd5c613b54a7df26eb1f48c2f3 | [
"MIT"
] | null | null | null | seaice/data/test/test_gridset_filters.py | andypbarrett/nsidc-seaice | 167a16309f7eaadd5c613b54a7df26eb1f48c2f3 | [
"MIT"
] | null | null | null | import datetime as dt
import unittest
from unittest.mock import patch
import numpy as np
import numpy.testing as npt
import pandas as pd
import pandas.testing as pdt
import seaice.data.gridset_filters as gf
from seaice.data.gridset_filters import apply_largest_pole_hole
from seaice.data.gridset_filters import concentration_cutoff
from seaice.data.gridset_filters import drop_invalid_ice
from seaice.data.gridset_filters import drop_bad_dates
from seaice.data.gridset_filters import drop_land
from seaice.data.gridset_filters import prevent_empty
from seaice.data.gridset_filters import ensure_full_nrt_month
import seaice.data.errors as e
import seaice.nasateam as nt
LAND = nt.FLAGS['land']
COAST = nt.FLAGS['coast']
class Test_apply_largest_pole_hole(unittest.TestCase):
def test_no_pole_hole(self):
gridset = {'data': np.array([[1, 1],
[1, 1]]),
'metadata': {'flags': {'pole': 251},
'missing_value': 255}}
actual = apply_largest_pole_hole(gridset)
npt.assert_array_equal(gridset['data'], actual['data'])
def test_one_pole_hole(self):
layer1 = np.array([[251, 1],
[1, 1]])
layer2 = np.array([[2, 2],
[2, 2]])
gridset = {'data': np.dstack([layer1, layer2]),
'metadata': {'flags': {'pole': 251},
'missing_value': 255}}
actual = apply_largest_pole_hole(gridset)
expected_layer2 = np.array([[251, 2],
[2, 2]])
expected = np.dstack([layer1, expected_layer2])
npt.assert_array_equal(expected, actual['data'])
def test_different_pole_holes(self):
layer1 = np.array([[251, 1],
[1, 1]])
layer2 = np.array([[251, 251],
[2, 2]])
gridset = {'data': np.dstack([layer1, layer2]),
'metadata': {'flags': {'pole': 251},
'missing_value': 255}}
actual = apply_largest_pole_hole(gridset)
expected_layer1 = np.array([[251, 251],
[1, 1]])
expected_layer2 = np.array([[251, 251],
[2, 2]])
expected = np.dstack([expected_layer1, expected_layer2])
npt.assert_array_equal(expected, actual['data'])
def test_with_layer_of_all_missing(self):
layer1 = np.array([[251, 1],
[1, 1]])
layer2 = np.array([[251, 251],
[2, 2]])
layer3 = np.array([[255, 255],
[255, 255]])
gridset = {'data': np.dstack([layer1, layer2, layer3]),
'metadata': {'flags': {'pole': 251},
'missing_value': 255}}
actual = apply_largest_pole_hole(gridset)
expected_layer1 = np.array([[251, 251],
[1, 1]])
expected_layer2 = np.array([[251, 251],
[2, 2]])
expected_layer3 = np.array([[255, 255],
[255, 255]])
expected = np.dstack([expected_layer1, expected_layer2, expected_layer3])
npt.assert_array_equal(expected, actual['data'])
class Test_concentration_cutoff(unittest.TestCase):
@patch('seaice.data.grid_filters.concentration_cutoff')
def test_calls_grid_filters_concentration_cutoff(self, mock_concentration_cutoff):
gridset = {
'data': np.array([[20, 10],
[5, 50]])
}
mock_concentration_cutoff.return_value = np.array([[20, 0],
[0, 50]])
expected = {
'data': np.array([[20, 0],
[0, 50]])
}
actual = concentration_cutoff(15, gridset)
npt.assert_array_equal(expected['data'], actual['data'])
self.assertEqual(15, mock_concentration_cutoff.call_args[0][0])
npt.assert_array_equal(np.array([[20, 10],
[5, 50]]), mock_concentration_cutoff.call_args[0][1])
class Test_drop_bad_dates(unittest.TestCase):
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_no_bad_data(self,
mock_get_bad_days_for_hemisphere):
gridset = {
            'data': np.full((5, 5, 3), 10, dtype=int),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '2.bin', '3.bin'],
'period_index': pd.period_range('2016-01-01', '2016-01-03', freq='D')
}
}
bad_dates_index = pd.PeriodIndex([], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['1.bin', '2.bin', '3.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.period_range('2016-01-01', '2016-01-03', freq='D'))
        npt.assert_array_equal(actual['data'],
                               np.full((5, 5, 3), 10, dtype=int))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_all_bad_data(self,
mock_get_bad_days_for_hemisphere):
the_period_index = pd.period_range('2016-01-01', '2016-01-03', freq='D')
gridset = {
            'data': np.full((5, 5, 3), 10, dtype=int),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '2.bin', '3.bin'],
'period_index': the_period_index,
'period': pd.Period(dt.date(2016, 2, 2), freq='D'),
'missing_value': 255
}
}
bad_dates_index = the_period_index.copy()
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
[])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex([], dtype='period[D]'))
        npt.assert_array_equal(actual['data'], np.full((5, 5), 255, dtype=int))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_middle_day_bad(self,
mock_get_bad_days_for_hemisphere):
        zeroth_grid = np.full((5, 5), 0, dtype=int)
        first_grid = np.full((5, 5), 1, dtype=int)
        second_grid = np.full((5, 5), 2, dtype=int)
gridset = {
'data': np.dstack([zeroth_grid, first_grid, second_grid]),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '2.bin', '3.bin'],
'period_index': pd.period_range('2016-01-01', '2016-01-03', freq='D')
}
}
bad_dates_index = pd.PeriodIndex(['2016-01-02'], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['1.bin', '3.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex(['2016-01-01', '2016-01-03'], freq='D'))
npt.assert_array_equal(actual['data'], np.dstack([zeroth_grid, second_grid]))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_two_days_bad(self,
mock_get_bad_days_for_hemisphere):
        zeroth_grid = np.full((5, 5), 0, dtype=int)
        first_grid = np.full((5, 5), 1, dtype=int)
        second_grid = np.full((5, 5), 2, dtype=int)
gridset = {
'data': np.dstack([zeroth_grid, first_grid, second_grid]),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '2.bin', '3.bin'],
'period_index': pd.period_range('2016-01-01', '2016-01-03', freq='D')
}
}
bad_dates_index = pd.PeriodIndex(['2016-01-02', '2016-01-03'], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['1.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex(['2016-01-01'], freq='D'))
npt.assert_array_equal(actual['data'], zeroth_grid)
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_no_bad_data_double_weighted_dates(self,
mock_get_bad_days_for_hemisphere):
gridset = {
            'data': np.full((5, 5, 3), 10, dtype=int),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '1.bin', '2.bin'],
'period_index': pd.PeriodIndex(['2016-01-01', '2016-01-01', '2016-01-02'], freq='D')
}
}
bad_dates_index = pd.PeriodIndex([], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['1.bin', '1.bin', '2.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex(['2016-01-01', '2016-01-01', '2016-01-02'], freq='D'))
        npt.assert_array_equal(actual['data'],
                               np.full((5, 5, 3), 10, dtype=int))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_all_bad_data_double_weighted_dates(self,
mock_get_bad_days_for_hemisphere):
the_period_index = pd.PeriodIndex(['2016-01-01', '2016-01-01', '2016-01-02'], freq='D')
gridset = {
            'data': np.full((5, 5, 3), 10, dtype=int),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '1.bin', '2.bin'],
'period_index': the_period_index,
'period': pd.Period(dt.date(2016, 2, 2), freq='D'),
'missing_value': 255
}
}
bad_dates_index = the_period_index.copy()
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
[])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex([], dtype='period[D]'))
        npt.assert_array_equal(actual['data'], np.full((5, 5), 255, dtype=int))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_day_bad_double_weighted_dates(self,
mock_get_bad_days_for_hemisphere):
        zeroth_grid = np.full((5, 5), 0, dtype=int)
        first_grid = np.full((5, 5), 1, dtype=int)
        second_grid = np.full((5, 5), 2, dtype=int)
gridset = {
'data': np.dstack([zeroth_grid, zeroth_grid, first_grid, second_grid]),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '1.bin', '2.bin', '3.bin'],
'period_index': pd.PeriodIndex(['2016-01-01',
'2016-01-01',
'2016-01-02',
'2016-01-03'], freq='D')
}
}
bad_dates_index = pd.PeriodIndex(['2016-01-02'], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['1.bin', '1.bin', '3.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex(['2016-01-01', '2016-01-01', '2016-01-03'], freq='D'))
npt.assert_array_equal(actual['data'], np.dstack([zeroth_grid, zeroth_grid, second_grid]))
@patch('seaice.datastore.get_bad_days_for_hemisphere')
def test_double_weighted_day_bad_double_weighted_dates(self,
mock_get_bad_days_for_hemisphere):
        zeroth_grid = np.full((5, 5), 0, dtype=int)
        first_grid = np.full((5, 5), 1, dtype=int)
        second_grid = np.full((5, 5), 2, dtype=int)
gridset = {
'data': np.dstack([zeroth_grid, zeroth_grid, first_grid, second_grid]),
'metadata': {
'hemi': 'N',
'temporality': 'D',
'files': ['1.bin', '1.bin', '2.bin', '3.bin'],
'period_index': pd.PeriodIndex(['2016-01-01',
'2016-01-01',
'2016-01-02',
'2016-01-03'], freq='D')
}
}
bad_dates_index = pd.PeriodIndex(['2016-01-01', '2016-01-02'], freq='D')
mock_get_bad_days_for_hemisphere.return_value = bad_dates_index
actual = drop_bad_dates(gridset)
self.assertEqual(actual['metadata']['files'],
['3.bin'])
pdt.assert_index_equal(actual['metadata']['period_index'],
pd.PeriodIndex(['2016-01-03'], freq='D'))
npt.assert_array_equal(actual['data'], second_grid)
class Test_drop_invalid_ice(unittest.TestCase):
def test_preserves_flag_and_removes_invalid_ice(self):
invalid_ice_mask = np.array([[False, False],
[False, True]])
gridset = {
'data': np.array([[100, 251],
[100, 100]]),
'metadata': {'valid_data_range': (0, 100), 'missing_value': 255}
}
actual = drop_invalid_ice(invalid_ice_mask, gridset)
expected_data = np.array([[100, 251],
[100, 0]])
self.assertTrue(actual['metadata']['drop_invalid_ice'])
npt.assert_array_equal(actual['data'], expected_data)
def test_preserves_flag_where_invalid_mask_is_present(self):
invalid_ice_mask = np.array([[False, True],
[False, True]])
gridset = {
'data': np.array([[100, 251],
[100, 100]]),
'metadata': {'valid_data_range': (0, 100), 'missing_value': 255}
}
actual = drop_invalid_ice(invalid_ice_mask, gridset)
expected_data = np.array([[100, 251],
[100, 0]])
self.assertTrue(actual['metadata']['drop_invalid_ice'])
npt.assert_array_equal(actual['data'], expected_data)
def test_does_nothing_when_no_invalid_ice(self):
invalid_ice_mask = np.array([[False, False],
[False, False]])
gridset = {
'data': np.array([[100, 251],
[100, 100]]),
'metadata': {'valid_data_range': (0, 100), 'missing_value': 255}
}
actual = drop_invalid_ice(invalid_ice_mask, gridset)
expected_data = np.array([[100, 251],
[100, 100]])
self.assertTrue(actual['metadata']['drop_invalid_ice'])
npt.assert_array_equal(actual['data'], expected_data)
def test_removes_missing_in_invalid_regions_and_leaves_it_in_valid_ones(self):
invalid_ice_mask = np.array([[True, False],
[True, False],
[True, False]])
gridset = {
'data': np.array([[100, 251],
[100, 100],
[255, 255]]),
'metadata': {'valid_data_range': (0, 100),
'missing_value': 255}
}
actual = drop_invalid_ice(invalid_ice_mask, gridset)
expected_data = np.array([[0, 251],
[0, 100],
[0, 255]])
self.assertTrue(actual['metadata']['drop_invalid_ice'])
npt.assert_array_equal(expected_data, actual['data'])
def test_leaves_all_missing_alone(self):
invalid_ice_mask = np.array([[True, False],
[True, False],
[True, False]])
gridset = {
'data': np.array([[255, 255],
[255, 255],
[255, 255]]),
'metadata': {'valid_data_range': (0, 100),
'missing_value': 255}
}
actual = drop_invalid_ice(invalid_ice_mask, gridset)
expected_data = np.array([[255, 255],
[255, 255],
[255, 255]])
self.assertNotIn('drop_invalid_ice', actual['metadata'])
npt.assert_array_equal(expected_data, actual['data'])
class Test_drop_land(unittest.TestCase):
def test_drops_land_values(self):
gridset = {
'data': np.array([[100, LAND],
[100, 100]]),
'metadata': {}
}
actual = drop_land(LAND, COAST, gridset)
expected_data = np.array([[100, 0],
[100, 100]])
self.assertTrue(actual['metadata']['drop_land'])
npt.assert_array_equal(actual['data'], expected_data)
def test_drops_coast_values(self):
gridset = {
'data': np.array([[100, 100],
[100, COAST]]),
'metadata': {}
}
actual = drop_land(LAND, COAST, gridset)
expected_data = np.array([[100, 100],
[100, 0]])
self.assertTrue(actual['metadata']['drop_land'])
npt.assert_array_equal(actual['data'], expected_data)
def test_drops_land_and_coast_values(self):
gridset = {
'data': np.array([[100, LAND],
[100, COAST]]),
'metadata': {}
}
actual = drop_land(LAND, COAST, gridset)
expected_data = np.array([[100, 0],
[100, 0]])
self.assertTrue(actual['metadata']['drop_land'])
npt.assert_array_equal(actual['data'], expected_data)
def test_does_nothing_when_no_land(self):
gridset = {
'data': np.array([[100, 76],
[100, 100]]),
'metadata': {}
}
actual = drop_land(LAND, COAST, gridset)
expected_data = np.array([[100, 76],
[100, 100]])
self.assertTrue(actual['metadata']['drop_land'])
npt.assert_array_equal(actual['data'], expected_data)
class Test_ensure_full_nrt_monthly(unittest.TestCase):
def setUp(self):
self.gridset = {
'data': 'NA for this test',
'metadata': {
'files': [None] * 31,
'period_index': pd.period_range('1/1/2001', '1/31/2001', freq='D'),
'period': pd.Period('2001-01', freq='M'),
'temporality': 'M'
}
}
def test_returns_exception_when_period_index_is_daily_and_nrt_filelist_is_incomplete(self):
self.gridset['metadata']['files'] = [None] * 20
with self.assertRaises(e.IncompleteNRTGridsetError):
ensure_full_nrt_month(self.gridset)
def test_returns_gridset_when_period_index_is_daily_and_nrt_filelist_is_complete(self):
actual = ensure_full_nrt_month(self.gridset)
expected = self.gridset
self.assertEqual(actual, expected)
def test_returns_gridset_when_temporality_is_incorrect(self):
self.gridset['metadata']['temporality'] = 'D'
expected = self.gridset
actual = ensure_full_nrt_month(self.gridset)
self.assertEqual(actual, expected)
def test_returns_gridset_when_period_is_monthly(self):
self.gridset['metadata']['files'] = [None]
self.gridset['metadata']['period_index'] = pd.period_range('1/1/2001', '1/1/2001', freq='M')
self.gridset['metadata']['period'] = pd.Period('2001-01', freq='M')
expected = self.gridset
actual = ensure_full_nrt_month(self.gridset)
self.assertEqual(actual, expected)
class Test_prevent_empty(unittest.TestCase):
def test_returns_same_nonempty_gridset(self):
gridset = {
'data': np.array([[100, 76],
[255, 100]]),
'metadata': {'missing_value': 255}
}
actual = prevent_empty(gridset)
expected_data = np.array([[100, 76],
[255, 100]])
self.assertEqual(actual['metadata'], {'missing_value': 255})
npt.assert_array_equal(actual['data'], expected_data)
def test_raises_error_with_all_missing_gridset(self):
gridset = {
'data': np.array([[255, 255],
[255, 255]]),
'metadata': {'missing_value': 255}
}
with self.assertRaises(e.SeaIceDataNoData):
prevent_empty(gridset)
class Test__interpolate_missing(unittest.TestCase):
def test_when_no_missing_data(self):
data_grid = np.ma.array([[50., 50.],
[100., 100.]])
zeros = np.zeros_like(data_grid)
interpolation_grids = np.expand_dims(zeros, axis=2)
# expected is just the data grid when there's no missing data
expected = data_grid
actual = gf._interpolate_missing(data_grid, interpolation_grids)
npt.assert_array_equal(expected.data, actual.data)
def test_when_data_is_masked(self):
missing = 255
data_grid = np.ma.array([[50., missing],
[100., 100.]])
zeros = np.zeros_like(data_grid)
interpolation_grids = np.expand_dims(zeros, axis=2)
data_grid = np.ma.masked_equal(data_grid, missing)
expected = np.ma.array([[50., 0],
[100., 100.]])
actual = gf._interpolate_missing(data_grid, interpolation_grids, missing_value=missing)
npt.assert_array_equal(expected, actual)
npt.assert_array_equal(expected.data, actual.data)
def test_when_target_grid_is_all_missing(self):
data_grid = np.ma.array([[50., 50.],
[100., 100.]])
zeros = np.zeros_like(data_grid)
interpolation_grids = np.dstack([data_grid, zeros])
        target_grid = np.full(data_grid.shape, 255, dtype=int)
expected = np.ma.array([[25., 25.],
[50., 50.]])
actual = gf._interpolate_missing(target_grid, interpolation_grids)
npt.assert_array_equal(expected.data, actual.data)
def test_masked_flagged_values_unchanged_and_masked_no_missing(self):
data_grid = np.array([[50., 251.],
[100., 253.]])
interpolation_grids = np.expand_dims(data_grid.copy(), axis=2)
expected = np.array([[50., 251.],
[100., 253.]])
actual = gf._interpolate_missing(data_grid, interpolation_grids)
npt.assert_array_equal(expected, actual)
npt.assert_array_equal(expected.data, actual.data)
def test_flagged_values_unchanged_with_missing(self):
data_grid = np.ma.array([[50., 251.],
[255., 253.]])
data_grid2 = np.ma.array([[50., 251.],
[75., 253.]])
interpolation_grids = np.expand_dims(data_grid2, axis=2)
expected = np.ma.array([[50., 251.],
[75., 253.]])
actual = gf._interpolate_missing(data_grid, interpolation_grids)
npt.assert_array_equal(expected, actual)
def test_flagged_values_are_minimum_anded_with_missing(self):
"""Test pole hole is replaced by data if it shrinks or grows"""
data_grid = np.ma.array([[50., 251.],
[255., 253.]])
data_grid2 = np.ma.array([[251., 251.], # extra pole in this data
[75., 253.]])
interpolation_grids = np.dstack([data_grid, data_grid2])
        target_grid = np.full(data_grid.shape, 255, dtype=int)
expected = np.ma.array([[50., 251.], # extra pole is replaced
[75., 253.]]) # with data from data_grid
actual = gf._interpolate_missing(target_grid, interpolation_grids)
npt.assert_array_equal(expected, actual)
def test_flagged_values_and_missing_mixed_together_return_missing(self):
data_grid = np.ma.array([[255., 251.],
[255., 253.]])
data_grid2 = np.ma.array([[251., 251.], # extra pole in this data
[75., 253.]])
        target_grid = np.full(data_grid.shape, 255, dtype=int)
interpolation_grids = np.dstack([data_grid, data_grid2])
expected = np.ma.array([[255., 251.],
[75., 253.]])
actual = gf._interpolate_missing(target_grid, interpolation_grids)
npt.assert_array_equal(expected, actual)
npt.assert_array_equal(expected.data, actual.data)
def test_flagged_values_and_missing_and_data_mixed_together_return_data(self):
data_grid = np.ma.array([[255., 251.],
[255., 253.]])
data_grid2 = np.ma.array([[251., 251.], # extra pole in this data
[75., 253.]])
data_grid3 = np.ma.array([[50., 251.],
[255., 253.]])
interpolation_grids = np.dstack([data_grid, data_grid3])
target_grid = data_grid2
expected = np.ma.array([[50., 251.],
[75., 253.]])
actual = gf._interpolate_missing(target_grid, interpolation_grids)
npt.assert_array_equal(expected, actual)
def test_with_missing_values(self):
data_grid = np.array([[55., 0.],
[55., 53.]])
data_grid2 = np.array([[60., 255.],
[75., 23.]])
data_grid3 = np.array([[50., 100.],
[55., 53.]])
interpolation_grids = np.dstack([data_grid, data_grid3])
target_grid = data_grid2
actual = gf._interpolate_missing(target_grid, interpolation_grids)
expected_data = np.array([[60., 50.],
[75., 23.]])
npt.assert_array_equal(expected_data, actual)
class Test__index_by_date(unittest.TestCase):
def test__index_by_date(self):
filelist = ['nt_20120918_f17_v1.1_s.bin',
'nt_20120919_f13_v1.1_s.bin',
'nt_20120920_f17_v1.1_s.bin']
date_ = dt.date(2012, 9, 19)
expected = 1
actual = gf._index_by_date(filelist, date_)
        self.assertEqual(expected, actual)
def test__index_by_date_with_no_matches(self):
filelist = ['nt_20120918_f17_v1.1_s.bin',
'nt_20120919_f13_v1.1_s.bin',
'nt_20120920_f17_v1.1_s.bin']
date = dt.date(2014, 9, 19)
with self.assertRaises(e.IndexNotFoundError):
gf._index_by_date(filelist, date)
class Test__extent_grid_from_conc_grid(unittest.TestCase):
def test_base_case(self):
conc = np.array([[100, 100],
[100, 100]])
actual = gf._extent_grid_from_conc_grid(conc)
expected = np.array([[1, 1],
[1, 1]])
npt.assert_array_equal(actual, expected)
def test_default_valid_range(self):
conc = np.array([[15, 100],
[14, 100]])
actual = gf._extent_grid_from_conc_grid(conc)
expected = np.array([[1, 1],
[0, 1]])
npt.assert_array_equal(actual, expected)
def test_only_counts_conc_within_range(self):
conc = np.array([[50, 51],
[27, 26]])
actual = gf._extent_grid_from_conc_grid(conc, valid_extent_range=(27, 50))
expected = np.array([[1, 0],
[1, 0]])
npt.assert_array_equal(actual, expected)
def test_preserves_flag_values(self):
conc = np.array([[100, 1977],
[100, 100]])
flags = {
'starwars': 1977
}
actual = gf._extent_grid_from_conc_grid(conc, flags=flags)
expected = np.array([[1, 1977],
[1, 1]])
npt.assert_array_equal(actual, expected)
def test_counts_pole_flag_as_extent(self):
conc = np.array([[100, 1977],
[100, 100]])
flags = {
'pole': 1977
}
actual = gf._extent_grid_from_conc_grid(conc, flags=flags)
expected = np.array([[1, 1],
[1, 1]])
npt.assert_array_equal(actual, expected)
def test_all_the_options(self):
conc = np.array([[13, 75, 100],
[12, 1977, 100],
[2389, 50, 100]])
flags = {
'pole': 1977,
'special': 2389
}
actual = gf._extent_grid_from_conc_grid(conc,
valid_extent_range=(13, 99),
flags=flags)
expected = np.array([[1, 1, 0],
[0, 1, 0],
[2389, 1, 0]])
npt.assert_array_equal(actual, expected)
def test_empty_gridset(self):
conc = np.array([[255, 255],
[255, 255]])
flags = {
'missing': 255
}
actual = gf._extent_grid_from_conc_grid(conc, flags=flags)
expected = np.array([[255, 255],
[255, 255]])
npt.assert_array_equal(actual, expected)
| 35.864644 | 100 | 0.524564 | 3,456 | 30,736 | 4.390625 | 0.069155 | 0.029524 | 0.039673 | 0.053842 | 0.821273 | 0.801107 | 0.760248 | 0.71583 | 0.68881 | 0.660076 | 0 | 0.073738 | 0.340806 | 30,736 | 856 | 101 | 35.906542 | 0.675189 | 0.007743 | 0 | 0.624406 | 0 | 0 | 0.093384 | 0.018139 | 0 | 0 | 0 | 0 | 0.122029 | 1 | 0.074485 | false | 0 | 0.026941 | 0 | 0.117274 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f400e606600c55f35fa1e8fcce3a914f8a7314bd | 3,958 | py | Python | backend/api/migrations/0001_initial.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | 3 | 2021-04-17T10:20:26.000Z | 2022-03-08T07:36:13.000Z | backend/api/migrations/0001_initial.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | null | null | null | backend/api/migrations/0001_initial.py | projectpai/paipass | 8b8e70b6808bf026cf957e240c7eed7bfcf4c55d | [
"MIT"
] | null | null | null | # Generated by Django 3.0.1 on 2020-05-21 10:15
import api.models
from django.db import migrations, models
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='EmailVerificationSession',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('email', models.EmailField(max_length=254, unique=True, verbose_name='email address')),
('status', models.CharField(choices=[('Accepted', 'ACCEPTED'), ('Pending', 'PENDING'), ('Cancelled', 'CANCELLED')], default='Cancelled', max_length=32, verbose_name='status')),
('ip_address', models.CharField(max_length=64, verbose_name='ip address')),
('created_on', models.DateTimeField(auto_now_add=True)),
('verified_on', models.DateTimeField(default=api.models.long_time_from_now)),
('verification_code', models.CharField(max_length=128, verbose_name='Verification Code')),
],
),
migrations.CreateModel(
name='IdentityVerificationSession',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
migrations.CreateModel(
name='PhoneVerificationSession',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
('phone_number', models.CharField(max_length=31, verbose_name='phone number')),
('status', models.CharField(choices=[('Accepted', 'ACCEPTED'), ('Pending', 'PENDING'), ('Cancelled', 'CANCELLED')], default='Pending', max_length=32, verbose_name='status')),
('ip_address', models.CharField(max_length=64, verbose_name='ip address')),
('created_on', models.DateTimeField(auto_now_add=True)),
('verified_on', models.DateTimeField(default=api.models.long_time_from_now)),
('verification_code', models.CharField(max_length=128, verbose_name='Verification Code')),
],
),
migrations.CreateModel(
name='ResetPasswordSession',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('status', models.CharField(choices=[('ACCEPTED', 'Accepted'), ('PENDING', 'Pending'), ('CANCELLED', 'Cancelled')], default='PENDING', max_length=32, verbose_name='status')),
('ip_address', models.CharField(max_length=64, verbose_name='ip address')),
('created_on', models.DateTimeField(auto_now_add=True)),
('verification_code', models.CharField(max_length=128, verbose_name='Verification Code')),
],
),
migrations.CreateModel(
name='SecondFactorAuthSession',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('encoded_code', models.CharField(max_length=128, verbose_name='encoded code')),
('created_on', models.DateTimeField(auto_now_add=True)),
('verified_on', models.DateTimeField()),
('exchanged_on', models.DateTimeField()),
],
),
migrations.CreateModel(
name='UnsubscribedEmail',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_on', models.DateTimeField(auto_now_add=True)),
('nonce', models.CharField(max_length=256, verbose_name='nonce')),
('hash', models.CharField(max_length=128, verbose_name='hash')),
],
),
]
| 52.078947 | 192 | 0.604851 | 385 | 3,958 | 6.023377 | 0.207792 | 0.085382 | 0.07762 | 0.103493 | 0.738249 | 0.738249 | 0.738249 | 0.721863 | 0.68564 | 0.68564 | 0 | 0.017479 | 0.248358 | 3,958 | 75 | 193 | 52.773333 | 0.762017 | 0.011369 | 0 | 0.632353 | 1 | 0 | 0.178727 | 0.025058 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.014706 | 0.044118 | 0 | 0.102941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f45e0c8b107ef72334958d4edb367f310df3f904 | 38 | py | Python | main/__init__.py | CrewSY/KovalAgent | 74a5848e5ef5baf52924a739cd33d1196a958205 | [
"MIT"
] | null | null | null | main/__init__.py | CrewSY/KovalAgent | 74a5848e5ef5baf52924a739cd33d1196a958205 | [
"MIT"
] | 13 | 2018-03-03T17:23:13.000Z | 2018-03-05T16:20:40.000Z | main/__init__.py | CrewSY/KovalAgent | 74a5848e5ef5baf52924a739cd33d1196a958205 | [
"MIT"
] | null | null | null | """Main app of KovalAgent project."""
| 19 | 37 | 0.684211 | 5 | 38 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.787879 | 0.815789 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f4834442cb847dad5458c418789ee7af84fe15b6 | 12,180 | py | Python | tests/test_cond.py | kadeng/tensorflow-onnx | db91f5b25cc2a053f46af3b2c04b65a679cff03b | [
"MIT"
] | null | null | null | tests/test_cond.py | kadeng/tensorflow-onnx | db91f5b25cc2a053f46af3b2c04b65a679cff03b | [
"MIT"
] | null | null | null | tests/test_cond.py | kadeng/tensorflow-onnx | db91f5b25cc2a053f46af3b2c04b65a679cff03b | [
"MIT"
] | null | null | null | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT license.

"""Unit Tests for tf.cond and tf.case."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import unittest

import numpy as np
import tensorflow as tf

from backend_test_base import Tf2OnnxBackendTestBase

# pylint: disable=missing-docstring,invalid-name,unused-argument,using-constant-test
# pylint: disable=abstract-method,arguments-differ


class CondTests(Tf2OnnxBackendTestBase):

    def test_simple_cond(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.cond(x[0] < y[0], lambda: x+y, lambda: x-y, name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    @unittest.skip("known issue in onnxruntime where an initializer is a subgraph input")
    def test_cond_with_const_branch(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        true_const = tf.constant(True, name="true_const", dtype=tf.bool)

        def cond_graph():
            with tf.name_scope("cond_graph", "cond_graph", [x, y]):
                b = tf.constant(np.array([2, 1, 3], dtype=np.float32), name="b", dtype=tf.float32)
                return b

        res = tf.cond(true_const, lambda: x+y, cond_graph, name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    @unittest.skip("a special case where the true and false branches of tf.cond each contain "
                   "only a const node, which depends on Switch via control inputs")
    def test_cond_with_only_const(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")

        def cond_graph():
            with tf.name_scope("cond_graph", "cond_graph", [x, y]):
                b = tf.constant(10, name="b", dtype=tf.float32)
                return b

        res = tf.cond(x[0] < y[0], cond_graph, cond_graph, name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_cond_with_multi_merge(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.cond(x[0] < y[0], lambda: [x, x+y], lambda: [x, x-y], name="test")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_cond_with_replicate_output(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.cond(x[0] < y[0], lambda: [x, x], lambda: [y, y], name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_nest_cond(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1

        def cond_graph():
            def cond_graph1():
                def cond_graph2():
                    return tf.cond(x[0] < y[0], lambda: x + y, lambda: tf.square(y))
                return tf.cond(tf.reduce_any(x < y), cond_graph2, cond_graph2)
            return tf.cond(x[0] > y[0], cond_graph1, cond_graph1)

        res = tf.cond(x[0] < y[0], cond_graph, cond_graph, name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    @unittest.skip("not supported yet")
    def test_cond_with_while_loop(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")

        def cond_graph():
            b = tf.constant(np.array([0], dtype=np.int32), dtype=tf.int32)
            z = tf.gather_nd(x, b)
            # while_loop: keep adding 1 while any element of y is less than 10
            c = lambda y: tf.reduce_any(tf.less(y, 10))
            b = lambda i: tf.add(y, 1)
            r = tf.while_loop(c, b, [y])
            return tf.cond(x[0] > y[0], lambda: z, lambda: r)

        res = x[2] * tf.cond(x[0] < y[0], lambda: x, cond_graph, name="test_cond")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    @unittest.skip("not supported yet")
    def test_cond_in_while_loop(self):
        i = tf.placeholder(tf.int32, (), name="input_1")
        inputs = tf.placeholder(tf.float32, (10,), name="input_2")
        inputs_2 = tf.identity(inputs)
        input_ta = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True).unstack(inputs_2)
        output_ta = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

        c = lambda i, *_: tf.logical_and(tf.less(i, 10), i >= 0)

        def b(i, out_ta):
            new_i = tf.add(i, 1)
            x = input_ta.read(i)
            x = tf.cond(x >= 0, lambda: x - 1, lambda: x + 3)
            out_ta_new = out_ta.write(i, x)
            return new_i, out_ta_new

        i_final, out_final = tf.while_loop(c, b, [i, output_ta])
        _ = tf.identity(i_final, name="i")
        _ = tf.identity(out_final.stack(), name="output_ta")

        input_names_with_port = ["input_1:0", "input_2:0"]
        feed_dict = {"input_1:0": np.array(0, dtype=np.int32),
                     "input_2:0": np.array([2.0, 16.0, 5.0, 1.6, 5.0, 6.0, 7.0, 8.0, 9.0, 10.], dtype=np.float32)}
        output_names_with_port = ["i:0", "output_ta:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port, rtol=1e-06)

    def test_simple_case(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.case([(tf.reduce_all(x < 1), lambda: x+y), (tf.reduce_all(y > 0), lambda: tf.square(y))],
                      default=lambda: x, name="test_case")
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_case_with_exclusive(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.case([(tf.reduce_all(x < 1), lambda: x+y), (tf.reduce_all(y > 0), lambda: tf.square(y))],
                      default=lambda: x, name="test_case", exclusive=True)
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_case_without_default_branch(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.case([(tf.reduce_all(x < 1), lambda: x+y), (tf.reduce_all(y > 0), lambda: tf.square(y))])
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_case_with_multi_merge(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1
        res = tf.case(
            [(tf.reduce_all(x < 1), lambda: [x+y, x-y]), (tf.reduce_all(y > 0), lambda: [tf.abs(x), tf.square(y)])],
            default=lambda: [x, y], name="test_case"
        )
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)

    def test_nest_case(self):
        x_val = np.array([1, 2, 3], dtype=np.float32)
        y_val = np.array([4, 5, 6], dtype=np.float32)
        x = tf.placeholder(tf.float32, x_val.shape, name="input_1")
        y = tf.placeholder(tf.float32, y_val.shape, name="input_2")
        x = x + 1
        y = y + 1

        def case_graph():
            return tf.case(
                [(tf.reduce_all(x < 1), lambda: x+y), (tf.reduce_all(y > 0), lambda: tf.square(y))],
                default=lambda: x - y,
                name="test_case"
            )

        res = tf.case([(x[0] > 0, case_graph), (x[0] < 0, case_graph)], default=lambda: x - y)
        _ = tf.identity(res, name="output")

        feed_dict = {"input_1:0": x_val, "input_2:0": y_val}
        input_names_with_port = ["input_1:0", "input_2:0"]
        output_names_with_port = ["output:0"]
        self.run_test_case(feed_dict, input_names_with_port, output_names_with_port)


if __name__ == '__main__':
    Tf2OnnxBackendTestBase.trigger(CondTests)
| 46.136364 | 116 | 0.603941 | 1,960 | 12,180 | 3.489286 | 0.082653 | 0.068431 | 0.098845 | 0.068431 | 0.780377 | 0.76517 | 0.759322 | 0.758737 | 0.753034 | 0.74353 | 0 | 0.047774 | 0.242118 | 12,180 | 263 | 117 | 46.311787 | 0.693099 | 0.022085 | 0 | 0.643836 | 0 | 0 | 0.092177 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09589 | false | 0 | 0.031963 | 0.009132 | 0.16895 | 0.004566 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
f48576ca06150b59dd8047a7b71b99ca57812f4e | 172 | py | Python | data/strategies/publishers/sage.py | jamesrharwood/journal-guidelines | fe6c0a6d3c0443df6fc816b9503fad24459ddb4a | [
"MIT"
] | null | null | null | data/strategies/publishers/sage.py | jamesrharwood/journal-guidelines | fe6c0a6d3c0443df6fc816b9503fad24459ddb4a | [
"MIT"
] | null | null | null | data/strategies/publishers/sage.py | jamesrharwood/journal-guidelines | fe6c0a6d3c0443df6fc816b9503fad24459ddb4a | [
"MIT"
] | null | null | null | url = "journals.sagepub.com/home/{ID}"
extractor_args = dict(restrict_text=[r"submission\s*guidelines"])
template = "https://journals.sagepub.com/author-instructions/{ID}"
| 43 | 66 | 0.761628 | 23 | 172 | 5.608696 | 0.826087 | 0.232558 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052326 | 172 | 3 | 67 | 57.333333 | 0.791411 | 0 | 0 | 0 | 0 | 0 | 0.616279 | 0.30814 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
be44d8527e6b58c546459f50a38c8c28c7b75321 | 7,487 | py | Python | test/common/test_inventory_record.py | polarG/splunk-connect-for-snmp | d1e85675edd5caa5bad9114d1611411e15cec063 | [
"Apache-2.0"
] | null | null | null | test/common/test_inventory_record.py | polarG/splunk-connect-for-snmp | d1e85675edd5caa5bad9114d1611411e15cec063 | [
"Apache-2.0"
] | null | null | null | test/common/test_inventory_record.py | polarG/splunk-connect-for-snmp | d1e85675edd5caa5bad9114d1611411e15cec063 | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from splunk_connect_for_snmp.common.inventory_record import InventoryRecord


class TestInventoryRecord(TestCase):

    def test_address_not_none(self):
        ir_dict = {"address": None}
        with self.assertRaises(ValueError) as e:
            InventoryRecord(**ir_dict)

        self.assertEqual(
            "field address cannot be null", e.exception.args[0][0].exc.args[0]
        )

    def test_address_not_commented(self):
        ir_dict = {"address": "#asd"}
        with self.assertRaises(ValueError) as e:
            InventoryRecord(**ir_dict)

        self.assertEqual(
            "field address cannot be commented", e.exception.args[0][0].exc.args[0]
        )

    def test_address_not_resolvable(self):
        ir_dict = {"address": "12313sdfsf"}
        with self.assertRaises(ValueError) as e:
            InventoryRecord(**ir_dict)

        self.assertEqual(
            "field address must be an IP or a resolvable hostname 12313sdfsf",
            e.exception.args[0][0].exc.args[0],
        )

    def test_port_too_high(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": 65537,
            "version": "2c",
            "walk_interval": 1850,
            "SmartProfiles": True,
            "delete": "",
        }
        with self.assertRaises(ValueError) as e:
            InventoryRecord(**ir_dict)

        self.assertEqual("Port out of range 65537", e.exception.args[0][0].exc.args[0])

    def test_version_none(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": None,
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual("2c", ir.version)

    def test_version_out_of_range(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "5a",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        with self.assertRaises(ValueError) as e:
            InventoryRecord(**ir_dict)

        self.assertEqual(
            "version out of range 5a accepted is 1 or 2c or 3",
            e.exception.args[0][0].exc.args[0],
        )

    def test_empty_community(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertIsNone(ir.community)

    def test_empty_walk_interval(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": None,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual(42000, ir.walk_interval)

    def test_too_low_walk_interval(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 20,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual(1800, ir.walk_interval)

    def test_too_high_walk_interval(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 50000,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual(42000, ir.walk_interval)

    def test_profiles_not_string(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": [],
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual([], ir.profiles)

    def test_smart_profiles_empty(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": False,
        }
        ir = InventoryRecord(**ir_dict)
        self.assertTrue(ir.SmartProfiles)

    def test_delete_empty(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": "",
        }
        ir = InventoryRecord(**ir_dict)
        self.assertFalse(ir.delete)

    def test_from_json(self):
        ir = InventoryRecord.from_json(
            '{"address": "192.168.0.1", "port": "34", "version": "3", "community": '
            '"public", "secret": "secret", "securityEngine": "ENGINE", "walk_interval": '
            '1850, "profiles": "", "SmartProfiles": true, "delete": ""}'
        )
        self.assertEqual(ir.address, "192.168.0.1")
        self.assertEqual(ir.port, 34)
        self.assertEqual(ir.version, "3")
        self.assertEqual(ir.community, "public")
        self.assertEqual(ir.secret, "secret")
        self.assertEqual(ir.securityEngine, "ENGINE")
        self.assertEqual(ir.walk_interval, 1850)
        self.assertEqual(ir.profiles, [])
        self.assertEqual(ir.SmartProfiles, True)
        self.assertEqual(ir.delete, False)

    def test_to_json(self):
        ir_dict = {
            "address": "192.168.0.1",
            "port": "34",
            "version": "3",
            "community": "public",
            "secret": "secret",
            "securityEngine": "ENGINE",
            "walk_interval": 1850,
            "profiles": "",
            "SmartProfiles": True,
            "delete": "",
        }
        ir = InventoryRecord(**ir_dict)
        self.assertEqual(
            '{"address": "192.168.0.1", "port": 34, "version": "3", "community": '
            '"public", "secret": "secret", "securityEngine": "ENGINE", "walk_interval": '
            '1850, "profiles": [], "SmartProfiles": true, "delete": false}',
            ir.to_json(),
        )
| 30.311741 | 89 | 0.497663 | 696 | 7,487 | 5.218391 | 0.127874 | 0.046256 | 0.038546 | 0.065529 | 0.737335 | 0.726322 | 0.715859 | 0.715859 | 0.715859 | 0.707874 | 0 | 0.052784 | 0.352211 | 7,487 | 246 | 90 | 30.434959 | 0.696082 | 0 | 0 | 0.650943 | 0 | 0.009434 | 0.25591 | 0 | 0 | 0 | 0 | 0 | 0.136792 | 1 | 0.070755 | false | 0 | 0.009434 | 0 | 0.084906 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
be76cdc119d092e0171003568528e58a8611795b | 288 | py | Python | vmlib/custom_exceptions.py | valentinmetraux/vmlib | 0a04aeb486b1b6ed4973041807e49ca5a59600e9 | [
"MIT"
] | null | null | null | vmlib/custom_exceptions.py | valentinmetraux/vmlib | 0a04aeb486b1b6ed4973041807e49ca5a59600e9 | [
"MIT"
] | null | null | null | vmlib/custom_exceptions.py | valentinmetraux/vmlib | 0a04aeb486b1b6ed4973041807e49ca5a59600e9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
class SimpleExitException(Exception):
    """
    Custom exception showing a simple exit program message
    """

    def __init__(self):
        pass

    def __str__(self):
        return 'Script terminated'.upper()

    def __repr__(self):
        return ''
| 16 | 58 | 0.59375 | 29 | 288 | 5.482759 | 0.793103 | 0.125786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004878 | 0.288194 | 288 | 17 | 59 | 16.941176 | 0.770732 | 0.267361 | 0 | 0 | 0 | 0 | 0.087179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.142857 | 0 | 0.285714 | 0.857143 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
be8b1eb0fcc45323576643008912f03114d49df4 | 87 | py | Python | pub_data_visualization/production/load/eco2mix/__init__.py | cre-os/pub-data-visualization | e5ec45e6397258646290836fc1a3b39ad69bf266 | [
"MIT"
] | 10 | 2020-10-08T11:35:49.000Z | 2021-01-22T16:47:59.000Z | pub_data_visualization/production/load/eco2mix/__init__.py | l-leo/pub-data-visualization | 68eea00491424581b057495a7f0f69cf74e16e7d | [
"MIT"
] | 3 | 2021-03-15T14:26:43.000Z | 2021-12-02T15:27:49.000Z | pub_data_visualization/production/load/eco2mix/__init__.py | cre-dev/pub-data-visualization | 229bb7a543684be2cb06935299345ce3263da946 | [
"MIT"
] | 1 | 2021-01-22T16:47:10.000Z | 2021-01-22T16:47:10.000Z |
"""
Module to load production data provided by eCO2mix.
"""
from .load import * | 10.875 | 55 | 0.655172 | 11 | 87 | 5.181818 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.241379 | 87 | 8 | 56 | 10.875 | 0.848485 | 0.586207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
be8cf68de1d5d3990d599543d2348b7009a5e004 | 8,516 | py | Python | tests/unit/transports/socket_transport_test.py | simomo/haigha | 59f59e58fd6edb6a4dc2a9e89e7aeff8bfaceaf0 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/transports/socket_transport_test.py | simomo/haigha | 59f59e58fd6edb6a4dc2a9e89e7aeff8bfaceaf0 | [
"BSD-3-Clause"
] | null | null | null | tests/unit/transports/socket_transport_test.py | simomo/haigha | 59f59e58fd6edb6a4dc2a9e89e7aeff8bfaceaf0 | [
"BSD-3-Clause"
] | null | null | null | '''
Copyright (c) 2011-2014, Agora Games, LLC All rights reserved.

https://github.com/agoragames/haigha/blob/master/LICENSE.txt
'''

from chai import Chai

import errno
import socket

from haigha.transports import socket_transport
from haigha.transports.socket_transport import *


class SocketTransportTest(Chai):

    def setUp(self):
        super(SocketTransportTest, self).setUp()
        self.connection = mock()
        self.transport = SocketTransport(self.connection)
        self.transport._host = 'server:1234'

    def test_init(self):
        assert_equals(bytearray(), self.transport._buffer)
        assert_true(self.transport._synchronous)

    def test_connect_with_no_klass_arg(self):
        klass = mock()
        sock = mock()
        orig_defaults = self.transport.connect.im_func.func_defaults
        self.transport.connect.im_func.func_defaults = (klass,)

        expect(klass).returns(sock)
        self.connection._connect_timeout = 4.12
        self.connection._sock_opts = {
            ('family', 'tcp'): 34,
            ('range', 'ipv6'): 'hex'
        }

        expect(sock.setblocking).args(True)
        expect(sock.settimeout).args(4.12)
        expect(sock.setsockopt).any_order().args(
            'family', 'tcp', 34).any_order()
        expect(sock.setsockopt).any_order().args(
            'range', 'ipv6', 'hex').any_order()
        expect(sock.connect).args(('host', 5309))
        expect(sock.settimeout).args(None)

        self.transport.connect(('host', 5309))
        self.transport.connect.im_func.func_defaults = orig_defaults

    def test_connect_with_klass_arg(self):
        klass = mock()
        sock = mock()

        expect(klass).returns(sock)
        self.connection._connect_timeout = 4.12
        self.connection._sock_opts = {
            ('family', 'tcp'): 34,
            ('range', 'ipv6'): 'hex'
        }

        expect(sock.setblocking).args(True)
        expect(sock.settimeout).args(4.12)
        expect(sock.setsockopt).any_order().args(
            'family', 'tcp', 34).any_order()
        expect(sock.setsockopt).any_order().args(
            'range', 'ipv6', 'hex').any_order()
        expect(sock.connect).args(('host', 5309))
        expect(sock.settimeout).args(None)

        self.transport.connect(('host', 5309), klass=klass)

    def test_read(self):
        self.transport._sock = mock()
        self.transport.connection.debug = False
        expect(self.transport._sock.settimeout).args(None)
        expect(self.transport._sock.getsockopt).args(
            socket.SOL_SOCKET, socket.SO_RCVBUF).returns(4095)
        expect(self.transport._sock.recv).args(4095).returns('buffereddata')

        assert_equals('buffereddata', self.transport.read())

    def test_read_when_data_buffered(self):
        self.transport._sock = mock()
        self.transport.connection.debug = False
        self.transport._buffer = bytearray('buffered')
        expect(self.transport._sock.settimeout).args(3)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).returns('data')

        assert_equals('buffereddata', self.transport.read(3))
        assert_equals(bytearray(), self.transport._buffer)

    def test_read_when_debugging(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(None)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).returns('buffereddata')
        expect(self.transport.connection.logger.debug).args(
            'read 12 bytes from server:1234')

        assert_equals('buffereddata', self.transport.read(0))

    def test_read_when_socket_closes(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(None)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).returns('')
        expect(self.transport.connection.transport_closed).args(
            msg='error reading from server:1234')

        self.transport.read()

    def test_read_when_socket_timeout(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(42)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).raises(
            socket.timeout('not now'))

        assert_equals(None, self.transport.read(42))

    def test_read_when_raises_eagain(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(42)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).raises(
            EnvironmentError(errno.EAGAIN, 'tryagainlater'))

        assert_equals(None, self.transport.read(42))

    def test_read_when_raises_socket_timeout(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(42)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).raises(
            socket.timeout())

        assert_equals(None, self.transport.read(42))

    def test_read_when_raises_other_errno(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.settimeout).args(42)
        expect(self.transport._sock.getsockopt).any_args().returns(4095)
        expect(self.transport._sock.recv).args(4095).raises(
            EnvironmentError(errno.EBADF, 'baddog'))
        expect(self.transport.connection.logger.exception).args(
            'error reading from server:1234')
        expect(self.transport.connection.transport_closed).args(
            msg='error reading from server:1234')

        with assert_raises(EnvironmentError):
            self.transport.read(42)

    def test_read_when_no_sock(self):
        self.transport.read()

    def test_buffer(self):
        self.transport._sock = mock()
        self.transport.buffer(bytearray('somedata'))
        assert_equals(bytearray('somedata'), self.transport._buffer)

    def test_buffer_when_already_buffered(self):
        self.transport._sock = mock()
        self.transport._buffer = bytearray('some')
        self.transport.buffer(bytearray('data'))
        assert_equals(bytearray('somedata'), self.transport._buffer)

    def test_buffer_when_no_sock(self):
        self.transport.buffer('somedata')

    def test_write(self):
        self.transport._sock = mock()
        self.transport.connection.debug = False
        expect(self.transport._sock.sendall).args('somedata')

        self.transport.write('somedata')

    def test_write_when_sendall_fails(self):
        self.transport._sock = mock()
        self.transport.connection.debug = False
        expect(self.transport._sock.sendall).args(
            'somedata').raises(Exception('fail'))

        assert_raises(Exception, self.transport.write, 'somedata')

    def test_write_when_sendall_raises_environmenterror(self):
        self.transport._sock = mock()
        self.transport.connection.debug = False
        expect(self.transport._sock.sendall).args('somedata').raises(
            EnvironmentError(errno.EAGAIN, 'tryagainlater'))
        expect(self.transport.connection.logger.exception).args(
            'error writing to server:1234')
        expect(self.transport.connection.transport_closed).args(
            msg='error writing to server:1234')

        self.transport.write('somedata')

    def test_write_when_debugging(self):
        self.transport._sock = mock()
        self.transport.connection.debug = 2
        expect(self.transport._sock.sendall).args('somedata')
        expect(self.transport.connection.logger.debug).args(
            'sent 8 bytes to server:1234')

        self.transport.write('somedata')

    def test_write_when_no_sock(self):
        self.transport.write('somedata')

    def test_disconnect(self):
        self.transport._sock = mock()
        expect(self.transport._sock.close)
        self.transport.disconnect()
        assert_equals(None, self.transport._sock)

    def test_disconnect_when_no_sock(self):
        self.transport.disconnect()
| 36.084746 | 76 | 0.662987 | 986 | 8,516 | 5.543611 | 0.125761 | 0.230699 | 0.139956 | 0.122027 | 0.82876 | 0.796012 | 0.733992 | 0.685145 | 0.641237 | 0.603915 | 0 | 0.025683 | 0.209018 | 8,516 | 235 | 77 | 36.238298 | 0.785778 | 0.014561 | 0 | 0.6 | 0 | 0 | 0.064528 | 0 | 0 | 0 | 0 | 0 | 0.08 | 1 | 0.131429 | false | 0 | 0.028571 | 0 | 0.165714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
be9d09fbbc96ef769180ca6f4343d3c8bae7bfdd | 137 | py | Python | django_pwned_passwords/apps.py | jamiecounsell/django-pwned-passwords | b18c4a73b497555c2d6672455eb1fd9d69201e70 | [
"MIT"
] | 24 | 2017-08-05T22:48:21.000Z | 2022-01-31T08:10:28.000Z | django_pwned_passwords/apps.py | jamiecounsell/django-pwned-passwords | b18c4a73b497555c2d6672455eb1fd9d69201e70 | [
"MIT"
] | 9 | 2018-03-06T14:49:24.000Z | 2020-10-22T18:05:23.000Z | django_pwned_passwords/apps.py | jamiecounsell/django-pwned-passwords | b18c4a73b497555c2d6672455eb1fd9d69201e70 | [
"MIT"
] | 7 | 2018-02-28T22:00:39.000Z | 2022-01-08T18:59:34.000Z | # -*- coding: utf-8
from django.apps import AppConfig
class DjangoPwnedPasswordsConfig(AppConfig):
name = 'django_pwned_passwords'
| 19.571429 | 44 | 0.766423 | 15 | 137 | 6.866667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008475 | 0.138686 | 137 | 6 | 45 | 22.833333 | 0.864407 | 0.124088 | 0 | 0 | 0 | 0 | 0.186441 | 0.186441 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.666667 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
bebffe549a5895c2a2397685dfbac64d5feef6ae | 74 | py | Python | transforms/spectrogram.py | koukyo1994/kaggle-rfcx | c3573d014d99312b58882e7b939de6c1055129b1 | [
"MIT"
] | 6 | 2021-02-18T05:18:17.000Z | 2022-02-19T02:49:32.000Z | transforms/spectrogram.py | koukyo1994/kaggle-rfcx | c3573d014d99312b58882e7b939de6c1055129b1 | [
"MIT"
] | null | null | null | transforms/spectrogram.py | koukyo1994/kaggle-rfcx | c3573d014d99312b58882e7b939de6c1055129b1 | [
"MIT"
] | 2 | 2021-02-18T11:31:50.000Z | 2022-02-19T02:49:07.000Z | def get_spectrogram_transforms(config: dict, phase: str):
return None
| 24.666667 | 57 | 0.77027 | 10 | 74 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148649 | 74 | 2 | 58 | 37 | 0.873016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
bec1fe4224049fad313dcde78e37ad76a44f0053 | 319 | py | Python | oslo/torch/nn/parallel/data_parallel/__init__.py | lipovsek/oslo | c2cde6229068808bf691e200f8af8c97c1631eb4 | [
"Apache-2.0"
] | 249 | 2021-12-21T05:25:53.000Z | 2022-03-21T21:03:58.000Z | oslo/torch/nn/parallel/data_parallel/__init__.py | lipovsek/oslo | c2cde6229068808bf691e200f8af8c97c1631eb4 | [
"Apache-2.0"
] | 21 | 2021-12-22T13:22:18.000Z | 2022-03-31T17:38:53.000Z | oslo/torch/nn/parallel/data_parallel/__init__.py | lipovsek/oslo | c2cde6229068808bf691e200f8af8c97c1631eb4 | [
"Apache-2.0"
] | 14 | 2021-12-21T10:28:36.000Z | 2022-03-29T12:35:44.000Z | from oslo.torch.nn.parallel.data_parallel.distributed_data_parallel import (
DistributedDataParallel,
)
from oslo.torch.nn.parallel.data_parallel.fully_sharded_data_parallel import (
FullyShardedDataParallel,
)
from oslo.torch.nn.parallel.data_parallel.sharded_data_parallel import (
ShardedDataParallel,
)
| 31.9 | 78 | 0.830721 | 37 | 319 | 6.891892 | 0.351351 | 0.282353 | 0.152941 | 0.176471 | 0.411765 | 0.411765 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0.094044 | 319 | 9 | 79 | 35.444444 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
bec3ed24d5ca0dc2379aae7dc04764241cfe56e7 | 23 | py | Python | src/devapps/version.py | axiros/DevApps | 8a246ab08f2ed6b4dcbf5efb50326f9add66df49 | [
"Apache-2.0",
"MIT"
] | null | null | null | src/devapps/version.py | axiros/DevApps | 8a246ab08f2ed6b4dcbf5efb50326f9add66df49 | [
"Apache-2.0",
"MIT"
] | null | null | null | src/devapps/version.py | axiros/DevApps | 8a246ab08f2ed6b4dcbf5efb50326f9add66df49 | [
"Apache-2.0",
"MIT"
] | null | null | null | __version__ = 20181022
| 11.5 | 22 | 0.826087 | 2 | 23 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0.130435 | 23 | 1 | 23 | 23 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fe3fd2d6885fb16a7822305b2cc927d43e677a62 | 44 | py | Python | python/testData/codeInsight/smartEnter/forFirst_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/codeInsight/smartEnter/forFirst_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/codeInsight/smartEnter/forFirst_after.py | truthiswill/intellij-community | fff88cfb0dc168eea18ecb745d3e5b93f57b0b95 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | def foo():
for <caret> in :
pass | 14.666667 | 20 | 0.454545 | 6 | 44 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.409091 | 44 | 3 | 21 | 14.666667 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.333333 | 0 | null | null | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
fe4da2163dd6138e07e65ec7f4fb6c921c3a439f | 7,646 | py | Python | tests/query/v2/test_optimizer.py | critical27/nebula-graph | 04d00e779e860ed3ddb226c416c335a22acc1147 | [
"Apache-2.0"
] | null | null | null | tests/query/v2/test_optimizer.py | critical27/nebula-graph | 04d00e779e860ed3ddb226c416c335a22acc1147 | [
"Apache-2.0"
] | null | null | null | tests/query/v2/test_optimizer.py | critical27/nebula-graph | 04d00e779e860ed3ddb226c416c335a22acc1147 | [
"Apache-2.0"
] | null | null | null | # --coding:utf-8--
#
# Copyright (c) 2020 vesoft inc. All rights reserved.
#
# This source code is licensed under Apache 2.0 License,
# attached with Common Clause Condition 1.0, found in the LICENSES directory.
import pytest
from tests.common.nebula_test_suite import NebulaTestSuite
class TestOptimizer(NebulaTestSuite):
@classmethod
def prepare(cls):
cls.use_nba()
def test_PushFilterDownGetNbrsRule(self):
resp = self.execute_query('''
GO 1 STEPS FROM "Boris Diaw" OVER serve
WHERE $^.player.age > 18 YIELD serve.start_year as start_year
''')
expected_plan = [
["Project", [1]],
["GetNeighbors", [2], ['($^.player.age>18)']],
["Start", []]
]
expected_data = [[2003], [2005], [2008], [2012], [2016]]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "James Harden" OVER like REVERSELY
WHERE $^.player.age > 18 YIELD like.likeness as likeness
''')
expected_plan = [
["Project", [1]],
["GetNeighbors", [2], ['($^.player.age>18)']],
["Start", []]
]
expected_data = [[90], [80], [99]]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "Boris Diaw" OVER serve
WHERE serve.start_year > 2005 YIELD serve.start_year as start_year
''')
expected_plan = [
["Project", [1]],
["GetNeighbors", [2], ['(serve.start_year>2005)']],
["Start", []]
]
expected_data = [[2008], [2012], [2016]]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "Lakers" OVER serve REVERSELY
WHERE serve.start_year < 2017 YIELD serve.start_year as start_year
''')
expected_plan = [
["Project", [1]],
["GetNeighbors", [2], ['(serve.start_year<2017)']],
["Start", []]
]
expected_data = [[2012], [1996], [2008], [1996], [2012]]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
@pytest.mark.skip(reason="Depends on other opt rules to eliminate duplicate project nodes")
def test_PushFilterDownGetNbrsRule_Failed(self):
resp = self.execute_query('''
GO 1 STEPS FROM "Boris Diaw" OVER serve
WHERE $^.player.age > 18 AND $$.team.name == "Lakers"
YIELD $^.player.name AS name
''')
expected_plan = [
["Project", [1]],
["Filter", [2], ['($$.team.name=="Lakers")']],
["GetNeighbors", [3], ['($^.player.age>18)']],
["Start", []]
]
expected_data = [['Boris Diaw']]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "Boris Diaw" OVER serve
WHERE $^.player.age > 18 OR $$.team.name == "Lakers"
YIELD $^.player.name AS name
''')
expected_plan = [
["Project", [1]],
            ["Filter", [2], ['($^.player.age>18) OR ($$.team.name=="Lakers")']],
["GetNeighbors", [3]],
["Start", []]
]
expected_data = [['Boris Diaw']]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
# fail to optimize cases
resp = self.execute_query('''
GO 1 STEPS FROM "Boris Diaw" OVER serve \
WHERE $$.team.name == "Lakers" YIELD $^.player.name AS name
''')
expected_plan = [
["Project", [1]],
["Filter", [2]],
["GetNeighbors", [3]],
["Start", []]
]
expected_data = [['Boris Diaw']]
self.check_exec_plan(resp, expected_plan)
self.check_out_of_order_result(resp, expected_data)
def test_TopNRule(self):
resp = self.execute_query('''
GO 1 STEPS FROM "Marco Belinelli" OVER like
YIELD like.likeness AS likeness
| ORDER BY likeness
| LIMIT 2
''')
expected_plan = [
["DataCollect", [1]],
["TopN", [2]],
["Project", [3]],
["GetNeighbors", [4]],
["Start", []]
]
expected_data = [[50], [55]]
self.check_exec_plan(resp, expected_plan)
self.check_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "Marco Belinelli" OVER like REVERSELY
YIELD like.likeness AS likeness |
ORDER BY likeness |
LIMIT 1
''')
expected_plan = [
["DataCollect", [1]],
["TopN", [2]],
["Project", [3]],
["GetNeighbors", [4]],
["Start", []]
]
expected_data = [[83]]
self.check_exec_plan(resp, expected_plan)
self.check_result(resp, expected_data)
def test_TopNRule_Failed(self):
resp = self.execute_query('''
GO 1 STEPS FROM "Marco Belinelli" OVER like
YIELD like.likeness as likeness
| ORDER BY likeness
| LIMIT 2, 3
''')
expected_plan = [
["DataCollect", [1]],
["Limit", [2]],
["Sort", [3]],
["Project", [4]],
["GetNeighbors", [5]],
["Start", []]
]
expected_data = [[60]]
self.check_exec_plan(resp, expected_plan)
self.check_result(resp, expected_data)
resp = self.execute_query('''
GO 1 STEPS FROM "Marco Belinelli" OVER like
YIELD like.likeness AS likeness
| ORDER BY likeness
''')
expected_plan = [
["DataCollect", [1]],
["Sort", [2]],
["Project", [3]],
["GetNeighbors", [4]],
["Start", []]
]
expected_data = [[50], [55], [60]]
self.check_exec_plan(resp, expected_plan)
self.check_result(resp, expected_data)
def test_LimitPushDownRule(self):
resp = self.execute_query('''
GO 1 STEPS FROM "James Harden" OVER like REVERSELY
| Limit 2
''')
expected_plan = [
["DataCollect", [1]],
["Limit", [2]],
["Project", [3]],
["GetNeighbors", [4], ['2']],
["Start", []]
]
# expected_data = [[90], [80], [99]]
self.check_exec_plan(resp, expected_plan)
if resp.data is None:
assert False, 'resp.data is None'
assert len(resp.data.rows) == 2
resp = self.execute_query('''
GO 1 STEPS FROM "Vince Carter" OVER serve
YIELD serve.start_year as start_year
| Limit 3, 4
''')
expected_plan = [
["DataCollect", [1]],
["Limit", [2]],
["Project", [3]],
["GetNeighbors", [4], ['7']],
["Start", []]
]
# expected_data = [[1998], [2004], [2009], [2010], [2011], [2014], [2017], [2018]]
self.check_exec_plan(resp, expected_plan)
assert resp.data is not None, 'resp.data is None'
assert len(resp.data.rows) == 4
| 34.286996 | 95 | 0.515041 | 811 | 7,646 | 4.679408 | 0.182491 | 0.082213 | 0.051383 | 0.068511 | 0.791568 | 0.767852 | 0.755468 | 0.725165 | 0.716733 | 0.684848 | 0 | 0.042868 | 0.337954 | 7,646 | 222 | 96 | 34.441441 | 0.706835 | 0.044206 | 0 | 0.641026 | 0 | 0 | 0.361606 | 0.01288 | 0 | 0 | 0 | 0 | 0.020513 | 1 | 0.030769 | false | 0 | 0.010256 | 0 | 0.046154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fe5aca361e7879ff77b515afd0fbb1ef94da56df | 1,658 | py | Python | zhmccli/__init__.py | zhmcclient/zhmccli | 946e104ee37606afed9376c7a5ee935d5fcfcda2 | [
"Apache-2.0"
] | 7 | 2019-05-14T10:03:39.000Z | 2022-02-22T08:57:29.000Z | zhmccli/__init__.py | zhmcclient/zhmccli | 946e104ee37606afed9376c7a5ee935d5fcfcda2 | [
"Apache-2.0"
] | 257 | 2017-09-21T09:11:46.000Z | 2022-03-31T13:59:01.000Z | zhmccli/__init__.py | zhmcclient/zhmccli | 946e104ee37606afed9376c7a5ee935d5fcfcda2 | [
"Apache-2.0"
] | 4 | 2018-11-27T14:49:49.000Z | 2021-02-20T04:59:40.000Z | # Copyright 2016-2019 IBM Corp. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
zhmccli - A CLI for the IBM Z HMC, written in pure Python.
"""
from __future__ import absolute_import
from ._version import * # noqa: F401
from ._cmd_info import * # noqa: F401
from ._cmd_session import * # noqa: F401
from ._cmd_cpc import * # noqa: F401
from ._cmd_lpar import * # noqa: F401
from ._cmd_partition import * # noqa: F401
from ._cmd_adapter import * # noqa: F401
from ._cmd_port import * # noqa: F401
from ._cmd_hba import * # noqa: F401
from ._cmd_nic import * # noqa: F401
from ._cmd_vfunction import * # noqa: F401
from ._cmd_vswitch import * # noqa: F401
from ._cmd_metrics import * # noqa: F401
from ._cmd_storagegroup import * # noqa: F401
from ._cmd_storagevolume import * # noqa: F401
from ._cmd_vstorageresource import * # noqa: F401
from ._cmd_capacitygroup import * # noqa: F401
from ._cmd_user import * # noqa: F401
from ._cmd_user_role import * # noqa: F401
from ._cmd_password_rule import * # noqa: F401
from ._cmd_character_rule import * # noqa: F401
| 39.47619 | 74 | 0.712907 | 239 | 1,658 | 4.740586 | 0.426778 | 0.185349 | 0.259488 | 0.317741 | 0.377758 | 0.044131 | 0 | 0 | 0 | 0 | 0 | 0.057034 | 0.206876 | 1,658 | 41 | 75 | 40.439024 | 0.804563 | 0.519904 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.045455 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fe5bec8bc9660632a7fbb77df129992900e91ee4 | 10,261 | py | Python | pecos/qeccs/surface_4444/instructions.py | quantum-pecos/PECOS | 44bc614a9152f3b316bacef6ca034f6a8a611293 | [
"Apache-2.0"
] | 15 | 2019-04-11T16:02:38.000Z | 2022-03-15T16:56:36.000Z | pecos/qeccs/surface_4444/instructions.py | quantum-pecos/PECOS | 44bc614a9152f3b316bacef6ca034f6a8a611293 | [
"Apache-2.0"
] | 4 | 2018-10-04T19:30:09.000Z | 2019-03-12T19:00:34.000Z | pecos/qeccs/surface_4444/instructions.py | quantum-pecos/PECOS | 44bc614a9152f3b316bacef6ca034f6a8a611293 | [
"Apache-2.0"
] | 3 | 2020-10-07T16:47:16.000Z | 2022-02-01T05:34:54.000Z | from ..instruction_parent_class import LogicalInstruction
from ...circuits.quantum_circuit import QuantumCircuit
from ..helper_functions import pos2qudit
class InstrSynExtraction(LogicalInstruction):
"""
Instruction for a round of syndrome extraction.
Parent class sets self.qecc.
"""
def __init__(self, qecc, symbol, **gate_params):
super().__init__(qecc, symbol, **gate_params)
self.symbol = 'instr_syn_extract'
self.init_ticks = gate_params.get('init_ticks', 0)
self.meas_ticks = gate_params.get('meas_ticks', 7)
self.data_ticks = gate_params.get('data_ticks', [2, 4, 3, 5])
self.abstract_circuit = QuantumCircuit(track_qudits=False, **gate_params)
self.data_qudit_set = self.qecc.data_qudit_set
self.ancilla_qudit_set = self.qecc.ancilla_qudit_set
self.ancilla_x_check = set([])
self.ancilla_z_check = set([])
# Go through the ancillas and grab the data qubits that are on either side of it.
layout = qecc.layout # qudit_id => (x, y)
self.pos2qudit = pos2qudit(layout)
for q, (x, y) in layout.items():
if x % 2 == 1 and y % 2 == 0:
# X ancilla
self._create_x_check(q, x, y)
elif x % 2 == 0 and y % 2 == 1:
# Z ancilla
self._create_z_check(q, x, y)
# Determine the logical operations
# --------------------------------
z_qudits = set(qecc.sides['top'])
x_qudits = set(qecc.sides['left'])
logical_ops = [ # Each element in the list corresponds to a logical qubit
# The keys label the type of logical operator
{'X': QuantumCircuit([{'X': x_qudits}]), 'Z': QuantumCircuit([{'Z': z_qudits}])},
]
self.initial_logical_ops = logical_ops
logical_ops = [ # Each element in the list corresponds to a logical qubit
# The keys label the type of logical operator
{'X': QuantumCircuit([{'X': x_qudits}]), 'Z': QuantumCircuit([{'Z': z_qudits}])},
]
self.final_logical_ops = logical_ops
self.logical_signs = None
self.logical_stabilizers = None
# Must be called at the end of initiation.
self._compile_circuit(self.abstract_circuit)
def _create_x_check(self, ancilla, x, y):
"""
Creates X-checks for circuit_extended.
"""
# register the x syndrome ancillas
self.ancilla_x_check.add(ancilla)
# get where the position of where the data qubits should be relative to the ancilla
data_pos = self._data_pos_x_check(x, y)
# Get the actual, available data-qubits and their ticks that correspond to the possible data qubit positions
datas, my_data_ticks = self._find_data(position_to_qudit=self.pos2qudit, positions=data_pos,
# ticks=self.x_ticks)
ticks=self.data_ticks)
# Now add the check to the extended circuit
locations = set(datas)
locations.add(ancilla)
self.abstract_circuit.append('X check', locations=locations, datas=datas, ancillas=ancilla,
ancilla_ticks=self.init_ticks, data_ticks=my_data_ticks,
meas_ticks=self.meas_ticks)
def _create_z_check(self, ancilla, x, y):
"""
Creates Z-checks for circuit_extended.
"""
# register the z syndrome ancillas
self.ancilla_z_check.add(ancilla)
# get where the position of where the data qubits should be relative to the ancilla
data_pos = self._data_pos_z_check(x, y)
# Get the actual, available data-qubits and their ticks that correspond to the possible data qubit positions
datas, my_data_ticks = self._find_data(position_to_qudit=self.pos2qudit, positions=data_pos,
# ticks=self.z_ticks)
ticks=self.data_ticks)
# Now add the check to the extended circuit
locations = set(datas)
locations.add(ancilla)
self.abstract_circuit.append('Z check', locations=locations, datas=datas, ancillas=ancilla,
ancilla_ticks=self.init_ticks, data_ticks=my_data_ticks,
meas_ticks=self.meas_ticks)
@staticmethod
def _find_data(position_to_qudit, positions, ticks):
"""
From the positions given for possible data qudits, add the qudits and their corresponding ticks for each qudit
that does exist.
"""
data_list = []
tick_list = []
for i, p in enumerate(positions):
data = position_to_qudit.get(p, None)
if data is not None:
data_list.append(data)
tick_list.append(ticks[i])
return data_list, tick_list
@staticmethod
def _data_pos_z_check(x, y):
"""
Determines the position of data qudits in a Z check in order of ticks.
Check direction: 1 | 2
|
---+---
|
3 | 4
"""
data_pos = [
(x - 1, y),
(x, y + 1),
(x, y - 1),
(x + 1, y)
]
return data_pos
@staticmethod
def _data_pos_x_check(x, y):
"""
Determines the position of data qudits in a Z check in order of ticks.
Check direction: 1 | 3
|
---+---
|
2 | 4
"""
data_pos = [
(x - 1, y),
(x, y - 1),
(x, y + 1),
(x + 1, y)
]
return data_pos
class InstrInitZero(LogicalInstruction):
"""
Instruction for initializing a logical zero.
It is just like syndrome extraction except the data qubits are initialized in the zero state at tick = 0.
`ideal_meas` == True will cause the measurements to be replace with ideal measurements.
Parent class sets self.qecc.
"""
def __init__(self, qecc, symbol, **gate_params):
super().__init__(qecc, symbol, **gate_params)
self.symbol = 'instr_init_zero'
self.data_qudit_set = self.qecc.data_qudit_set
self.ancilla_qudit_set = self.qecc.ancilla_qudit_set
# This is basically syndrome extraction round where all the data qubits are initialized to zero.
syn_ext = qecc.instruction('instr_syn_extract', **gate_params)
# Make a shallow copy of the abstract circuits.
self.abstract_circuit = syn_ext.abstract_circuit.copy()
self.abstract_circuit.params.update(gate_params)
self.ancilla_x_check = syn_ext.ancilla_x_check
self.ancilla_z_check = syn_ext.ancilla_z_check
data_qudits = syn_ext.data_qudit_set
self.abstract_circuit.append('init |0>', locations=data_qudits, tick=0)
self.initial_logical_ops = [ # Each element in the list corresponds to a logical qubit
# The keys label the type of logical operator
{'X': None, 'Z': None}, # None => can be anything
]
# Special for state initialization:
# ---------------------------------
# list of tuples of logical check and delogical stabilizer for each logical qudit.
self.final_logical_ops = [
{'Z': QuantumCircuit([{'Z': set(qecc.sides['top'])}]), 'X': QuantumCircuit([{'X': set(qecc.sides['left'])}])}
]
# List of corresponding logical sign. (The logical sign if the instruction is preformed ideally.)
self.logical_signs = [0]
self.logical_stabilizers = ['Z']
# ---------------------------------
# Must be called at the end of initiation.
self._compile_circuit(self.abstract_circuit)
class InstrInitPlus(LogicalInstruction):
"""
Instruction for initializing a logical plus.
It is just like syndrome extraction except the data qubits are initialized in the plus state at tick = 0.
`ideal_meas` == True will cause the measurements to be replace with ideal measurements.
Parent class sets self.qecc.
"""
def __init__(self, qecc, symbol, **gate_params):
super().__init__(qecc, symbol, **gate_params)
self.symbol = 'instr_init_plus'
self.data_qudit_set = self.qecc.data_qudit_set
self.ancilla_qudit_set = self.qecc.ancilla_qudit_set
# This is basically syndrome extraction round where all the data qubits are initialized to zero.
syn_ext = qecc.instruction('instr_syn_extract', **gate_params)
# Make a shallow copy of the abstract circuits.
self.abstract_circuit = syn_ext.abstract_circuit.copy()
self.abstract_circuit.params.update(gate_params)
self.ancilla_x_check = syn_ext.ancilla_x_check
self.ancilla_z_check = syn_ext.ancilla_z_check
data_qudits = syn_ext.data_qudit_set
self.abstract_circuit.append('init |0>', locations=data_qudits, tick=0)
self.abstract_circuit.append('H', locations=data_qudits, tick=1)
self.initial_logical_ops = [ # Each element in the list corresponds to a logical qubit
# The keys label the type of logical operator
{'X': None, 'Z': None}, # None => can be anything
]
# Special for state initialization:
# ---------------------------------
# list of tuples of logical check and delogical stabilizer for each logical qudit.
self.final_logical_ops = [
{'X': QuantumCircuit([{'X': set(qecc.sides['left'])}]), 'Z': QuantumCircuit([{'Z': set(qecc.sides['top'])}])}
]
# List of corresponding logical sign. (The logical sign if the instruction is preformed ideally.)
self.logical_signs = [0]
self.logical_stabilizers = ['X']
# ---------------------------------
# Must be called at the end of initiation.
self._compile_circuit(self.abstract_circuit)
| 36.257951 | 121 | 0.591073 | 1,253 | 10,261 | 4.634477 | 0.141261 | 0.038746 | 0.042535 | 0.022042 | 0.775616 | 0.7689 | 0.727226 | 0.705528 | 0.705528 | 0.705528 | 0 | 0.006178 | 0.305916 | 10,261 | 282 | 122 | 36.386525 | 0.809183 | 0.330572 | 0 | 0.53125 | 0 | 0 | 0.028253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.023438 | 0 | 0.132813 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
fe736262cc3f69d303fe69eb226e61901d59dbe1 | 752 | py | Python | project/__main__.py | jaroslaw-wieczorek/TIIK | 74d72831735834d43e5965778851a2d2951346ec | [
"MIT"
] | null | null | null | project/__main__.py | jaroslaw-wieczorek/TIIK | 74d72831735834d43e5965778851a2d2951346ec | [
"MIT"
] | null | null | null | project/__main__.py | jaroslaw-wieczorek/TIIK | 74d72831735834d43e5965778851a2d2951346ec | [
"MIT"
] | null | null | null | from PyQt5.QtWidgets import QApplication
from PyQt5.QtWidgets import QMainWindow
from PyQt5.QtWidgets import QWidget
from PyQt5.QtWidgets import QFileDialog
from PyQt5.QtWidgets import QAction
from PyQt5.QtWidgets import QPushButton
from PyQt5.QtWidgets import QLabel
from PyQt5.QtWidgets import QLineEdit
from PyQt5.QtWidgets import QTextEdit
from PyQt5.QtWidgets import QPlainTextEdit
from PyQt5.QtWidgets import QLayout
from PyQt5.QtWidgets import QHBoxLayout
from PyQt5.QtWidgets import QVBoxLayout
from PyQt5.QtGui import QIcon
from pathlib import Path
import os
import sys
from project.src.gui.mainwindow import MainWindow
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = MainWindow()
sys.exit(app.exec_())
| 22.117647 | 49 | 0.8125 | 98 | 752 | 6.142857 | 0.357143 | 0.209302 | 0.388704 | 0.518272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021672 | 0.140957 | 752 | 33 | 50 | 22.787879 | 0.910217 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.818182 | 0 | 0.818182 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
feab1d1cebedbf8a85b5199b0acb490847b20798 | 57 | py | Python | day_ok/schedule/bl/__init__.py | bostud/day_ok | 2bcee68252b698f5818808d1766fb3ec3f07fce8 | [
"MIT"
] | null | null | null | day_ok/schedule/bl/__init__.py | bostud/day_ok | 2bcee68252b698f5818808d1766fb3ec3f07fce8 | [
"MIT"
] | 16 | 2021-02-27T08:36:19.000Z | 2021-04-07T11:43:31.000Z | day_ok/schedule/bl/__init__.py | bostud/day_ok | 2bcee68252b698f5818808d1766fb3ec3f07fce8 | [
"MIT"
] | null | null | null | from .lessons import get_weekly_classroom_lessons_by_day
| 28.5 | 56 | 0.912281 | 9 | 57 | 5.222222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070175 | 57 | 1 | 57 | 57 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
22829b7017334a3083c6f8ff5d0f6c6998e13398 | 5,261 | py | Python | tests/test_weasyprint.py | danihodovic/wagtail-resume | a4283bd37f2ea8137f4d9ab84f066397112287de | [
"MIT"
] | null | null | null | tests/test_weasyprint.py | danihodovic/wagtail-resume | a4283bd37f2ea8137f4d9ab84f066397112287de | [
"MIT"
] | 1 | 2019-12-10T09:30:26.000Z | 2019-12-10T09:30:26.000Z | tests/test_weasyprint.py | danihodovic/wagtail-resume | a4283bd37f2ea8137f4d9ab84f066397112287de | [
"MIT"
] | null | null | null | import logging
import pytest
from django.urls import reverse
from wagtail.core.models import Site
from wagtail_resume.views import resume_pdf
from .models import CustomResumePage
pytestmark = pytest.mark.django_db
def test_weasyprint(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
pdf_generation_visibility="always",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}"
res = client.get(url)
assert "adin-hodovic" in res["content-disposition"]
assert res.status_code == 200
assert res["content-type"] == "application/pdf"
def test_weasyprint_with_font(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
pdf_generation_visibility="always",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}"
res = client.get(url)
assert "adin-hodovic" in res["content-disposition"]
assert res.status_code == 200
assert res["content-type"] == "application/pdf"
def test_weasyprint_unauthenticated(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
pdf_generation_visibility="authenticated",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}"
res = client.get(url)
assert b"You need to be authenticated to generate a resume PDF file." in res.content
assert res.status_code == 403
def test_weasyprint_authenticated(rf, django_user_model, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
pdf_generation_visibility="authenticated",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}"
request = rf.get(url)
username = "user1"
password = "bar"
user = django_user_model.objects.create_user(username=username, password=password)
request.user = user
res = resume_pdf(request)
print(res)
assert "adin-hodovic" in res["content-disposition"]
assert res.status_code == 200
assert res["content-type"] == "application/pdf"
def test_weasyprint_disabled(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
pdf_generation_visibility="never",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}"
res = client.get(url)
assert b"<h1>PDF generation is disabled for this resume.</h1>" in res.content
assert res.status_code == 400
def test_weasyprint_with_no_page_id(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}"
res = client.get(url)
assert b"Missing page id for resume generation" in res.content
assert res.status_code == 400
def test_weasyprint_with_no_number(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
)
site.root_page.add_child(instance=resume)
# Test random page pdf generation
url = f"{reverse('generate_resume_pdf')}?page_id={resume.id}'"
res = client.get(url)
assert b"Page id is not a number" in res.content
assert res.status_code == 400
def test_weasyprint_no_resume(client, mocker):
mocker.patch("wagtail_resume.views.HTML")
site = Site.objects.first()
resume = CustomResumePage(
title="Resume",
full_name="Adin Hodovic",
role="Software engineer",
font="lato",
)
site.root_page.add_child(instance=resume)
# Test non existent resume
url = f"{reverse('generate_resume_pdf')}?page_id=9999"
res = client.get(url)
assert res.status_code == 404
def test_weasyprint_logger_warnings_disabled():
logger = logging.getLogger("weasyprint")
assert logger.level == 40
| 31.315476 | 88 | 0.679909 | 661 | 5,261 | 5.248109 | 0.154312 | 0.048717 | 0.046699 | 0.055347 | 0.793312 | 0.787259 | 0.780917 | 0.77198 | 0.762179 | 0.762179 | 0 | 0.007863 | 0.202243 | 5,261 | 167 | 89 | 31.502994 | 0.81868 | 0.047139 | 0 | 0.679104 | 0 | 0 | 0.260592 | 0.117906 | 0 | 0 | 0 | 0 | 0.141791 | 1 | 0.067164 | false | 0.014925 | 0.044776 | 0 | 0.11194 | 0.08209 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
22a4680db02b6c7e142f036f3039c7f703c6cad0 | 6,987 | py | Python | lab3/lab3d_tests.py | tyler274/CS1 | 155fad58f1d714ebd71fa178194711d1ee5dfe20 | [
"MIT"
] | null | null | null | lab3/lab3d_tests.py | tyler274/CS1 | 155fad58f1d714ebd71fa178194711d1ee5dfe20 | [
"MIT"
] | null | null | null | lab3/lab3d_tests.py | tyler274/CS1 | 155fad58f1d714ebd71fa178194711d1ee5dfe20 | [
"MIT"
] | null | null | null | # lab3d_tests.py
import nose
from lab3d import *
SMALL = 1.0e-4
# ----------------------------------------------------------------------
# Data we need for testing.
# ----------------------------------------------------------------------
#
# L-system strings.
#
# Koch snowflake.
k1 = 'F-F++F-F++F-F++F-F++F-F++F-F'
k2 = 'F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F'
k3 = 'F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F-F-F++F-F-F-F++F-F++F-F++F-F-F-F++F-F'
#
# Lists of drawing commands.
#
# Koch snowflake.
kc1 = ['F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1']
kc2 = ['F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'L 60', 'F 1', 'R 60', 'R 60', 'F 1', 'L 60', 'F 1']
#
# Bounds.
#
# Koch snowflake.
kb1 = (-6.661338147750939e-16, 3.0, -2.598076211353315, 0.8660254037844386)
kb2 = (-2.220446049250313e-16, 9.0, -7.794228634059945, 2.598076211353316)
#
# Lists of locations.
#
# Koch snowflake.
klocs1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 60.0), (1.5, 0.8660254037844386, 60.0), (1.5, 0.8660254037844386, 0.0), (1.5, 0.8660254037844386, 300.0), (2.0, 0.0, 300.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 0.0, 300.0), (3.0, 0.0, 240.0), (2.4999999999999996, -0.8660254037844384, 240.0), (2.4999999999999996, -0.8660254037844384, 300.0), (2.9999999999999996, -1.732050807568877, 300.0), (2.9999999999999996, -1.732050807568877, 240.0), (2.9999999999999996, -1.732050807568877, 180.0), (1.9999999999999996, -1.7320508075688767, 180.0), (1.9999999999999996, -1.7320508075688767, 240.0), (1.4999999999999991, -2.598076211353315, 240.0), (1.4999999999999991, -2.598076211353315, 180.0), (1.4999999999999991, -2.598076211353315, 120.0), (0.9999999999999993, -1.7320508075688763, 120.0), (0.9999999999999993, -1.7320508075688763, 180.0), (-6.661338147750939e-16, -1.732050807568876, 180.0), (-6.661338147750939e-16, -1.732050807568876, 120.0), (-6.661338147750939e-16, -1.732050807568876, 60.0), (0.49999999999999944, -0.8660254037844375, 60.0), (0.49999999999999944, -0.8660254037844375, 120.0), (-3.3306690738754696e-16, 1.3322676295501878e-15, 120.0)]
# ----------------------------------------------------------------------
# Helper functions.
# ----------------------------------------------------------------------
def floatEquals(f1, f2):
'''
Compare two floats for equality.
'''
return (abs(f1 - f2) < SMALL)
def compareDrawingCommandLists(l1, l2):
'''
Return True if the lists 'l1' and 'l2' contain the same drawing
commands.
'''
if len(l1) != len(l2):
return False
for cmd1, cmd2 in zip(l1, l2):
line1 = cmd1.split()
line2 = cmd2.split()
if line1[0] != line2[0]:
return False
for (s1, s2) in zip(line1[1:], line2[1:]):
v1 = float(s1)
v2 = float(s2)
if abs(v1 - v2) >= SMALL:
return False
return True
def compareTuples(t1, t2, l):
'''
Return True if the tuples 't1' and 't2' contain the same values
(within a certain tolerance). 'l' is the expected tuple length.
'''
if len(t1) != len(t2):
return False
if len(t1) != l:
return False
for (x, y) in zip(t1, t2):
if abs(x - y) >= SMALL:
return False
return True
def compareBounds(b1, b2):
'''
Return True if the bounds 'b1' and 'b2' contain the same values
(within a certain tolerance).
'''
return compareTuples(b1, b2, 4)
def compareLocations(l1, l2):
'''
Return True, if locations 'l1' and 'l2' contain the same values
(within a certain tolerance).
'''
return compareTuples(l1, l2, 3)
def allLocations(cmds):
'''
Return a list of all the locations/angles encountered while
executing a list of commands.
'''
loc = (0.0, 0.0, 0.0)
locs = [loc]
for cmd in cmds:
next_loc = nextLocation(loc[0], loc[1], loc[2], cmd)
locs.append(next_loc)
loc = next_loc
return locs
def compareListOfLocations(cmds1, cmds2):
if len(cmds1) != len(cmds2):
return False
for l1, l2 in zip(cmds1, cmds2):
if not compareLocations(l1, l2):
return False
return True
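The tolerance-based comparisons above can be exercised on their own; a minimal standalone sketch mirroring `floatEquals`/`compareTuples` (example values chosen for illustration, independent of `lab3d`):

```python
SMALL = 1.0e-4

def float_equals(f1, f2):
    # equal within the shared absolute tolerance
    return abs(f1 - f2) < SMALL

def compare_tuples(t1, t2, length):
    # same length as expected, and every pair of values within tolerance
    if len(t1) != len(t2) or len(t1) != length:
        return False
    return all(abs(x - y) < SMALL for x, y in zip(t1, t2))

print(float_equals(0.1 + 0.2, 0.3))                             # True despite float rounding
print(compare_tuples((1.0, 0.0, 60.0), (1.0, 1e-6, 60.0), 3))   # True: within tolerance
print(compare_tuples((1.0, 0.0), (1.0, 0.0), 3))                # False: wrong length
```

Absolute tolerance is sufficient here because the coordinates involved stay near unit scale; for widely varying magnitudes a relative tolerance would be the safer choice.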
# ----------------------------------------------------------------------
# The tests themselves.
# ----------------------------------------------------------------------
def test_update():
assert update(koch, koch['start']) == k1
assert update(koch, k1) == k2
assert update(koch, k2) == k3
def test_iterate():
assert iterate(koch, 1) == k1
assert iterate(koch, 2) == k2
assert iterate(koch, 3) == k3
def test_lsystemToDrawingCommands():
assert compareDrawingCommandLists(
lsystemToDrawingCommands(koch_draw, iterate(koch, 1)), kc1)
assert compareDrawingCommandLists(
lsystemToDrawingCommands(koch_draw, iterate(koch, 2)), kc2)
def test_bounds():
assert compareBounds(bounds(kc1), kb1)
assert compareBounds(bounds(kc2), kb2)
def test_nextLocation():
assert compareLocations(nextLocation(0.0, 0.0, 0.0, 'F 1'),
(1.0, 0.0, 0.0))
assert compareLocations(nextLocation(0.0, 0.0, 0.0, 'L 10'),
(0.0, 0.0, 10.0))
assert compareLocations(nextLocation(0.0, 0.0, 0.0, 'R 10'),
(0.0, 0.0, 350.0))
assert compareListOfLocations(klocs1, allLocations(kc1))
def test_saveDrawing():
saveDrawing('koch_2', kb2, kc2)
with open('koch_2', 'r') as f:
lines = f.readlines()
b2 = tuple(map(float, lines[0].split()))
assert compareBounds(b2, kb2)
assert compareDrawingCommandLists(lines[1:], kc2)
if __name__ == '__main__':
nose.runmodule()
| 41.589286 | 1,162 | 0.527694 | 1,211 | 6,987 | 3.025599 | 0.132122 | 0.135917 | 0.201419 | 0.265284 | 0.530022 | 0.491812 | 0.316321 | 0.254094 | 0.242358 | 0.223253 | 0 | 0.228789 | 0.193645 | 6,987 | 167 | 1,163 | 41.838323 | 0.421548 | 0.166452 | 0 | 0.146067 | 0 | 0.033708 | 0.197889 | 0.10343 | 0 | 0 | 0 | 0 | 0.179775 | 1 | 0.146067 | false | 0 | 0.022472 | 0 | 0.337079 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
22bfde7b71ad6ac11153c93ce5bd1f4ff7c47859 | 127 | py | Python | python/missingNumber.py | caleberi/LeetCode | fa170244648f73e76d316a6d7fc0e813adccaa82 | [
"MIT"
] | 1 | 2021-08-10T20:00:24.000Z | 2021-08-10T20:00:24.000Z | python/missingNumber.py | caleberi/LeetCode | fa170244648f73e76d316a6d7fc0e813adccaa82 | [
"MIT"
] | null | null | null | python/missingNumber.py | caleberi/LeetCode | fa170244648f73e76d316a6d7fc0e813adccaa82 | [
"MIT"
] | 3 | 2021-06-11T11:56:39.000Z | 2021-08-10T08:50:49.000Z | class Solution:
def missingNumber(self, nums: List[int]) -> int:
return int((len(nums)*(len(nums)+1)/2)-sum(nums))
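A quick standalone check of the Gauss-sum identity the solution relies on (hypothetical usage, not part of the original submission):

```python
def missing_number(nums):
    # sum of 0..n equals n*(n+1)//2; subtracting the observed sum
    # leaves exactly the one absent value
    n = len(nums)
    return n * (n + 1) // 2 - sum(nums)

print(missing_number([0, 1, 3]))  # → 2
print(missing_number([3, 0, 1]))  # → 2 (order does not matter)
```

Using integer division (`//`) avoids the float round-trip that the original `int(... / 2 ...)` performs.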
| 31.75 | 57 | 0.622047 | 19 | 127 | 4.157895 | 0.684211 | 0.177215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.173228 | 127 | 3 | 58 | 42.333333 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
22f30c346fb55d22d67de624ddf1a819e2a5b82b | 165 | py | Python | http1/__init__.py | c4s4/http1 | ab2610823f060632227f9ca60e98320800b5c5be | [
"Apache-2.0"
] | 1 | 2019-11-30T14:24:25.000Z | 2019-11-30T14:24:25.000Z | http1/__init__.py | c4s4/http1 | ab2610823f060632227f9ca60e98320800b5c5be | [
"Apache-2.0"
] | 2 | 2015-04-25T08:14:49.000Z | 2015-04-26T09:08:08.000Z | http1/__init__.py | c4s4/http1 | ab2610823f060632227f9ca60e98320800b5c5be | [
"Apache-2.0"
] | 1 | 2015-04-25T09:12:59.000Z | 2015-04-25T09:12:59.000Z | #!/usr/bin/env python
# encoding: UTF-8
from http1.http1 import request, Response, TooManyRedirectsException, get, head, post, put, delete, connect, options, trace
| 33 | 123 | 0.763636 | 22 | 165 | 5.727273 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0.121212 | 165 | 4 | 124 | 41.25 | 0.848276 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fe0a3bbb48b2015af1f91e79d56560e1dc9830e7 | 188 | py | Python | app/helper.py | yangshun/cs4243-project | b41af28ab27fc9ec0993a98d91e8f05616949500 | [
"MIT"
] | 3 | 2021-03-08T17:32:08.000Z | 2021-06-15T13:05:45.000Z | app/helper.py | yangshun/cs4243-project | b41af28ab27fc9ec0993a98d91e8f05616949500 | [
"MIT"
] | null | null | null | app/helper.py | yangshun/cs4243-project | b41af28ab27fc9ec0993a98d91e8f05616949500 | [
"MIT"
] | 1 | 2020-03-14T22:50:44.000Z | 2020-03-14T22:50:44.000Z |
MAX_FLOAT32_COORD = 1e11
def box_coord(a):
    # Clamp a coordinate into the float32-safe range
    # [-MAX_FLOAT32_COORD, MAX_FLOAT32_COORD].
    if a > MAX_FLOAT32_COORD:
        return MAX_FLOAT32_COORD
    elif a < -MAX_FLOAT32_COORD:
        return -MAX_FLOAT32_COORD
    return a
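A small sketch of the clamping behaviour (illustrative values only):

```python
MAX_FLOAT32_COORD = 1e11

def box_coord(a):
    # values beyond the safe range are pinned to its boundary
    if a > MAX_FLOAT32_COORD:
        return MAX_FLOAT32_COORD
    elif a < -MAX_FLOAT32_COORD:
        return -MAX_FLOAT32_COORD
    return a

print(box_coord(5.0))    # → 5.0 (inside the range, unchanged)
print(box_coord(1e15))   # clamped down to 1e11
print(box_coord(-1e15))  # clamped up to -1e11
```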
fe0d270f02b564bdc909388e35eec51b1c1dead2 | 704 | py | Python | clearml/automation/__init__.py | allegroai/clearml | 0bc43a31f49702077f91ffca2a36d0cb00d7e1a5 | [
"Apache-2.0"
] | 1,118 | 2020-12-23T09:28:43.000Z | 2022-03-31T14:22:31.000Z | clearml/automation/__init__.py | allegroai/clearml | 0bc43a31f49702077f91ffca2a36d0cb00d7e1a5 | [
"Apache-2.0"
] | 347 | 2020-12-23T22:38:48.000Z | 2022-03-31T20:01:06.000Z | clearml/automation/__init__.py | allegroai/clearml | 0bc43a31f49702077f91ffca2a36d0cb00d7e1a5 | [
"Apache-2.0"
] | 228 | 2020-12-23T14:44:51.000Z | 2022-03-27T08:56:48.000Z | from .parameters import (UniformParameterRange, DiscreteParameterRange, UniformIntegerParameterRange, ParameterSet,
LogUniformParameterRange)
from .optimization import GridSearch, RandomSearch, HyperParameterOptimizer, Objective
from .job import ClearmlJob
from .controller import PipelineController
from .scheduler import TaskScheduler
from .trigger import TriggerScheduler
__all__ = ["UniformParameterRange", "DiscreteParameterRange", "UniformIntegerParameterRange", "ParameterSet",
"LogUniformParameterRange", "GridSearch", "RandomSearch", "HyperParameterOptimizer", "Objective",
"ClearmlJob", "PipelineController", "TaskScheduler", "TriggerScheduler"]
| 58.666667 | 115 | 0.786932 | 45 | 704 | 12.222222 | 0.488889 | 0.156364 | 0.258182 | 0.301818 | 0.389091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133523 | 704 | 11 | 116 | 64 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0.309659 | 0.167614 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a3d84d6384d4bca7b25d9cf304485dc9a745acfd | 35 | py | Python | 8393/8393.py3.py | isac322/BOJ | 35959dd1a63d75ebca9ed606051f7a649d5c0c7b | [
"MIT"
] | 14 | 2017-05-02T02:00:42.000Z | 2021-11-16T07:25:29.000Z | 8393/8393.py3.py | isac322/BOJ | 35959dd1a63d75ebca9ed606051f7a649d5c0c7b | [
"MIT"
] | 1 | 2017-12-25T14:18:14.000Z | 2018-02-07T06:49:44.000Z | 8393/8393.py3.py | isac322/BOJ | 35959dd1a63d75ebca9ed606051f7a649d5c0c7b | [
"MIT"
] | 9 | 2016-03-03T22:06:52.000Z | 2020-04-30T22:06:24.000Z | print(sum(range(int(input()) + 1))) | 35 | 35 | 0.628571 | 6 | 35 | 3.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.057143 | 35 | 1 | 35 | 35 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
a3e59ad072513f39bb11e1aa214c1e8d520e0025 | 674 | py | Python | actions/__init__.py | fallcat/synst | 0fa4adffa825af4a62b6e739b59c4125a7b6698e | [
"BSD-3-Clause"
] | 1 | 2019-09-08T13:55:21.000Z | 2019-09-08T13:55:21.000Z | actions/__init__.py | fallcat/synst | 0fa4adffa825af4a62b6e739b59c4125a7b6698e | [
"BSD-3-Clause"
] | 2 | 2019-10-02T15:23:55.000Z | 2019-10-16T02:38:25.000Z | actions/__init__.py | fallcat/synst | 0fa4adffa825af4a62b6e739b59c4125a7b6698e | [
"BSD-3-Clause"
] | null | null | null | '''
Initialize the actions module
'''
from actions.train import Trainer
from actions.evaluate import Evaluator
from actions.translate import Translator
from actions.probe import Prober
from actions.probe_train import ProbeTrainer
from actions.probe_evaluate import ProbeEvaluator
from actions.probe_new_translate import ProbeNewTranslator
from actions.probe_attn_stats import ProbeStatsGetter
from actions.probe_off_diagonal import ProbeOffDiagonal
class Pass(object):
''' Action that does nothing... '''
def __init__(self, *args, **kwargs):
''' Do nothing '''
pass
def __call__(self, *args, **kwargs):
''' Do nothing '''
pass
| 28.083333 | 58 | 0.747774 | 80 | 674 | 6.1 | 0.475 | 0.202869 | 0.196721 | 0.065574 | 0.110656 | 0.110656 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170623 | 674 | 23 | 59 | 29.304348 | 0.872987 | 0.121662 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.214286 | 0.642857 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
432b51eddb77eca1d54b0538a17e5450c9bda9eb | 235 | py | Python | python/testData/inspections/PyTypeCheckerInspection/CallOperator.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/inspections/PyTypeCheckerInspection/CallOperator.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/inspections/PyTypeCheckerInspection/CallOperator.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | class Foo:
def __call__(self, arg: int):
return arg
bar = Foo()
bar.__call__(<warning descr="Expected type 'int', got 'str' instead">"s"</warning>)
bar(<warning descr="Expected type 'int', got 'str' instead">"s"</warning>) | 33.571429 | 83 | 0.655319 | 34 | 235 | 4.294118 | 0.5 | 0.164384 | 0.273973 | 0.328767 | 0.657534 | 0.657534 | 0.657534 | 0.657534 | 0.657534 | 0.657534 | 0 | 0 | 0.157447 | 235 | 7 | 84 | 33.571429 | 0.737374 | 0 | 0 | 0 | 0 | 0 | 0.330508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4a31b62c5d1800a565b4caefc2706dfedca80a34 | 53 | py | Python | softlearning/environments/gym/wrappers/__init__.py | pedrodbs/mbpo | 42c3be5eca050116c0cbada91588184e97af7c12 | [
"MIT"
] | 362 | 2019-04-16T22:45:21.000Z | 2022-03-30T06:13:22.000Z | softlearning/environments/gym/wrappers/__init__.py | pedrodbs/mbpo | 42c3be5eca050116c0cbada91588184e97af7c12 | [
"MIT"
] | 39 | 2019-05-03T04:21:14.000Z | 2022-03-11T23:45:03.000Z | softlearning/environments/gym/wrappers/__init__.py | pedrodbs/mbpo | 42c3be5eca050116c0cbada91588184e97af7c12 | [
"MIT"
] | 67 | 2019-04-17T03:35:29.000Z | 2021-12-26T05:39:37.000Z | from .normalize_action import NormalizeActionWrapper
| 26.5 | 52 | 0.90566 | 5 | 53 | 9.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 53 | 1 | 53 | 53 | 0.959184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4a37f192cd1f291208d745d53286211df639ed45 | 129 | py | Python | mysite/plans/admin.py | tenderghost/FlyPersonalAssistant | f9b379a42c32ff1ea73803d25cce7be04f8ec497 | [
"MIT"
] | 1 | 2018-01-07T16:45:31.000Z | 2018-01-07T16:45:31.000Z | mysite/plans/admin.py | tenderghost/FlyPersonalAssistant | f9b379a42c32ff1ea73803d25cce7be04f8ec497 | [
"MIT"
] | null | null | null | mysite/plans/admin.py | tenderghost/FlyPersonalAssistant | f9b379a42c32ff1ea73803d25cce7be04f8ec497 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Plan, PlanChange
admin.site.register(Plan)
admin.site.register(PlanChange) | 21.5 | 36 | 0.821705 | 18 | 129 | 5.888889 | 0.555556 | 0.169811 | 0.320755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 129 | 6 | 37 | 21.5 | 0.905983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4a3bcc70aa3d8d9869d2f4b2e945e0ba0dba767f | 463 | py | Python | CURSO PYTHON/pythonsexercicios/ex006.py | Sabrinaparussoli/PYTHON | 77436608ffd799e9e2bbe4fa5084443fb7382793 | [
"MIT"
] | null | null | null | CURSO PYTHON/pythonsexercicios/ex006.py | Sabrinaparussoli/PYTHON | 77436608ffd799e9e2bbe4fa5084443fb7382793 | [
"MIT"
] | null | null | null | CURSO PYTHON/pythonsexercicios/ex006.py | Sabrinaparussoli/PYTHON | 77436608ffd799e9e2bbe4fa5084443fb7382793 | [
"MIT"
] | null | null | null | n = int(input('Digite um numero: '))
d = n * 2
t = n * 3
r = n**(1/2)
print('O dobro do valor de {} é: {}'.format(n, d))
print('O triplo do valor de {} é: {}'.format(n, t))
print('A raiz do valor de {} é: {:.2f}'.format(n, r))
# another possibility (same results, computed inline)
n = int(input('Digite um numero: '))
print('O dobro do valor de {} é: {}'.format(n, (n*2)))
print('O triplo do valor de {} é: {}'.format(n, (n*3)))
print('A raiz do valor de {} é: {:.2f}'.format(n, pow(n,(1/2)))) | 33.071429 | 64 | 0.559395 | 91 | 463 | 2.846154 | 0.296703 | 0.162162 | 0.208494 | 0.23166 | 0.849421 | 0.849421 | 0.671815 | 0.664093 | 0.664093 | 0.223938 | 0 | 0.026316 | 0.179266 | 463 | 14 | 64 | 33.071429 | 0.655263 | 0.041037 | 0 | 0.181818 | 0 | 0 | 0.478555 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
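The script computes the square root two ways, `n**(1/2)` and `pow(n, 1/2)`; both agree with `math.sqrt` — a small illustrative check, not part of the exercise:

```python
import math

n = 9
print(n ** (1 / 2))   # → 3.0
print(pow(n, 1 / 2))  # → 3.0
print(math.sqrt(n))   # → 3.0
```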
4a3ef977fd68e95cd023de83f2029d043f28d410 | 40 | py | Python | gwaripper_webGUI/__main__.py | nilfoer/gwaripper | 28492b9894973633612471094d24907b2bc47728 | [
"MIT"
] | 6 | 2021-03-12T08:57:18.000Z | 2022-03-27T00:28:17.000Z | gwaripper_webGUI/__main__.py | nilfoer/gwaripper | 28492b9894973633612471094d24907b2bc47728 | [
"MIT"
] | 1 | 2020-10-05T04:25:53.000Z | 2020-10-05T14:20:07.000Z | gwaripper_webGUI/__main__.py | nilfoer/gwaripper | 28492b9894973633612471094d24907b2bc47728 | [
"MIT"
] | 2 | 2021-03-12T11:05:46.000Z | 2021-09-12T22:53:58.000Z | from .start_webgui import main
main()
| 8 | 30 | 0.75 | 6 | 40 | 4.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 4 | 31 | 10 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4a4274153902b41236931c0b9f759e5228dbe250 | 176 | py | Python | myvenv/lib/python3.5/site-packages/allauth/socialaccount/providers/digitalocean/urls.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 1 | 2016-12-22T18:40:40.000Z | 2016-12-22T18:40:40.000Z | myvenv/lib/python3.5/site-packages/allauth/socialaccount/providers/digitalocean/urls.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 6 | 2020-06-05T18:44:19.000Z | 2022-01-13T00:48:56.000Z | myvenv/lib/python3.5/site-packages/allauth/socialaccount/providers/digitalocean/urls.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 1 | 2022-02-01T17:19:28.000Z | 2022-02-01T17:19:28.000Z | from allauth.socialaccount.providers.oauth2.urls import default_urlpatterns
from .provider import DigitalOceanProvider
urlpatterns = default_urlpatterns(DigitalOceanProvider)
| 35.2 | 75 | 0.886364 | 17 | 176 | 9.058824 | 0.647059 | 0.233766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006098 | 0.068182 | 176 | 4 | 76 | 44 | 0.932927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4a98404bb1c711b53f7122eefc153fe6a05754ba | 131 | py | Python | tests/test_mkpy.py | wenshuin/mkpy | 52d22b9bac50eede794bacd756869b1600b71ec0 | [
"BSD-3-Clause"
] | null | null | null | tests/test_mkpy.py | wenshuin/mkpy | 52d22b9bac50eede794bacd756869b1600b71ec0 | [
"BSD-3-Clause"
] | 25 | 2019-09-29T22:35:34.000Z | 2020-12-18T01:05:20.000Z | tests/test_mkpy.py | wenshuin/mkpy | 52d22b9bac50eede794bacd756869b1600b71ec0 | [
"BSD-3-Clause"
] | 1 | 2020-09-28T23:32:31.000Z | 2020-09-28T23:32:31.000Z | from pathlib import Path
import mkpy
from mkpy import get_ver
def test_get_ver():
# screen version for updates
get_ver()
| 14.555556 | 32 | 0.740458 | 21 | 131 | 4.428571 | 0.619048 | 0.193548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21374 | 131 | 8 | 33 | 16.375 | 0.902913 | 0.198473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4aa053a50844fbe4dd5766a2b51b5949a397271d | 5,486 | py | Python | python/oneroll/who_owe.py | exposit/katamoiran | f5ad354c102e3e24a666777a1a934fe79b73aea8 | [
"BSD-Source-Code"
] | 6 | 2017-10-09T05:07:50.000Z | 2019-07-11T05:10:15.000Z | python/oneroll/who_owe.py | exposit/katamoiran | f5ad354c102e3e24a666777a1a934fe79b73aea8 | [
"BSD-Source-Code"
] | 1 | 2018-09-18T13:40:55.000Z | 2018-11-22T16:41:04.000Z | python/oneroll/who_owe.py | exposit/katamoiran | f5ad354c102e3e24a666777a1a934fe79b73aea8 | [
"BSD-Source-Code"
] | 2 | 2017-11-05T19:50:28.000Z | 2019-03-10T08:35:50.000Z | import random
# not fancy; just goes down the list and prints each choice, handling the exceptions in a minimal way
roll = []
print("\nYou owe the money to...")
roll.append(random.randint(0,11))
print(["1. a sibling or close relative", "2. a noble", "3. a government body", "4. a church", "5. a gangster", "6. an outcast or monster", "7. a warlord or barbarian", "8. a wizard", "9. a thief", "10. a merchant", "11. a spymaster", "12. a courtesan"][roll[-1]])
print("\nYou owe the money because...")
roll.append(random.randint(0,7))
print(["1. of gambling", "2. of an expensive vice", "3. of someone else", "4. of a joke or prank gone wrong", "5. they trusted you and you messed up", "6. you trusted a pretty face", "7. you stole it and they know", "8. it wasn't your fault but they hold you accountable anyway"][roll[-1]])
print("\nIf you don't pay up they'll...")
roll.append(random.randint(0,5))
print(["1. take it out of your hide (possibly not fatal but you like both of your arms)", "2. make an example out of you (fatal and messy)", "3. have a legal claim to something you value", "4. take everything, including your person", "5. send a really skilled assassin out looking for you", "6. hurt people you care about"][roll[-1]])
print("\nYou have...")
roll.append(random.randint(0,3))
print(["1. until they notice you're not dead yet", "2. until you want to go back to your home town", "3. as long as you can keep dealing with the hired thugs they keep sending", "4. a bargain buying you time but you'd better deliver"][roll[-1]])
print("\nYou know that one path to freedom is to...")
roll.append(random.randint(0,9))
print(["1. retrieve something the one you owe values higher than the debt", "2. confront the one you owe directly", "3. pay it back in full", "4. convince someone to sacrifice for you", "5. flee far enough that they can't reach you", "6. find a powerful patron", "7. find a powerful item of protection", "8. turn the tables on them, but you're going to need allies", "9. lay low, attracting no attention, for a long, long time", "10. become someone else"][roll[-1]])
print("\nFinally, there's a wrinkle.")
chart = ["1. Everybody you used to know knows you're toxic now.", "2. There's a price out on your head and even if you satisfy the debt it'll take time for word to get out.", "3. The law is looking for you but it's a total frame job.", "4. The law is looking for you, and you can totally explain...", "5. You did something terrible that you never thought you were capable of to escape.", "6. Someone you cared about betrayed you to them but you escaped.", "7. They actually killed you, or near enough, and think you're (still) dead.", "8. One of your former lovers or allies is working for them and they know you *very* well.", "9. They have a lackey who isn't directly dangerous, but follows you from place to place making sure everyone knows you owe.", "10. An ancestral spirit only you can see is attached to you. It's disappointed with you and finds everything about you depressing and repugnant and tells you what a failure you are whenever it thinks of it, which is all the time.", "11. You're cursed and slowly wasting away. Only collecting towards repayment of your debt eases the symptoms.", "12. You're cursed; at night your Shadow possesses your body and makes it do things you'd never do normally.", "13. The debt has a component you can't repay with money, only with blood -- or perhaps not at all.", "14. The person you owe money to will forgive the debt if you do something for them that is a. horrifying, b. extremely dangerous, c. illegal. Choose two.", "15. An innocent paid a terrible price when you incurred the debt and their family will do something about it. Roll again on the d6 chart and on the d4 chart to see what.", "16. Someone who didn't deserve it was ruined or suffered terribly because of your actions or lack of action.", "17. You're wracked with terrible nightmares about the money and you don't know why.", "18. You've been framed for the most heinous crime you can think of -- not just larceny or petty theft. The person you owe is just that vindictive.", "19. Roll until you have two results and combine them.", "20. You actually owe money to 1+1d4 different groups. Roll a d4, then roll up that many more groups."]
roll.append(random.randint(0,19))
wrinkle = chart[roll[-1]]
if "19" in wrinkle:
chart = chart[:-2]
wrinkle = " AND ".join(random.sample(chart,2))
print(wrinkle)
print("\nThe amount you owe is %s" % ((max(roll) + 1) * 1000))
print("\nYou owe something more difficult to repay than mere money. You owe...")
# index the final chart by the sum of the two lowest (0-based) rolls,
# which always lands within the "2."-"10." entries below
s = min(roll)
roll.remove(s)
ss = min(item for item in roll if item >= s)
total = s + ss
print(["2. A favor.", "3. A life. Yours, perhaps, or maybe a sacrificial victim's. Or a fellow debtor you could turn in...", "4. A rare spell component that's going to be hell to replace.", "5. A rare antique in a specific style.", "6. A rare animal. Don't even ask.", "7. A magic effect or power that you usurped from the expected owner. You can do something extraordinary now but have no control over it. And they're mad they lost out.", "8. An inconvenient magical animal that has imprinted on you instead of the intended owner (or died in your care if you don't want a pet).", "9. Your skin. Part of a powerful ritual is inked indelibly on your skin. You're the last copy.", "10. A powerful magical item, now bonded to you. Not necessarily a useful item, just a powerful one."][total])
| 119.26087 | 2,153 | 0.718556 | 1,010 | 5,486 | 3.90297 | 0.385149 | 0.010147 | 0.024353 | 0.035008 | 0.058346 | 0.023846 | 0 | 0 | 0 | 0 | 0 | 0.026455 | 0.173168 | 5,486 | 45 | 2,154 | 121.911111 | 0.842593 | 0.018046 | 0 | 0 | 0 | 0.5625 | 0.823398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.03125 | 0.46875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
4aa128ee254d15024fc9297beafbaa9cc6e9ae4c | 35 | py | Python | kd_common/kd_common/google/__init__.py | konovalovdmitry/catsnap | d5f1d7c37dcee1ad3fee2cdc12a3b44b56f4c63f | [
"MIT"
] | null | null | null | kd_common/kd_common/google/__init__.py | konovalovdmitry/catsnap | d5f1d7c37dcee1ad3fee2cdc12a3b44b56f4c63f | [
"MIT"
] | null | null | null | kd_common/kd_common/google/__init__.py | konovalovdmitry/catsnap | d5f1d7c37dcee1ad3fee2cdc12a3b44b56f4c63f | [
"MIT"
] | 1 | 2021-09-30T08:06:20.000Z | 2021-09-30T08:06:20.000Z | from kd_common.google import sheet
| 17.5 | 34 | 0.857143 | 6 | 35 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
43508d97ec7d8908f8fa6d7f29824c472e27b8c9 | 163 | py | Python | rlp/sedes/__init__.py | vaporyco/pyrlp | bdef65842a310d610e277096b403d284999ecbaa | [
"MIT"
] | null | null | null | rlp/sedes/__init__.py | vaporyco/pyrlp | bdef65842a310d610e277096b403d284999ecbaa | [
"MIT"
] | null | null | null | rlp/sedes/__init__.py | vaporyco/pyrlp | bdef65842a310d610e277096b403d284999ecbaa | [
"MIT"
] | null | null | null | from . import raw
from .binary import Binary, binary
from .big_endian_int import BigEndianInt, big_endian_int
from .lists import CountableList, List, Serializable
| 32.6 | 56 | 0.828221 | 23 | 163 | 5.695652 | 0.521739 | 0.137405 | 0.183206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122699 | 163 | 4 | 57 | 40.75 | 0.916084 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
43969d63b05383ba59d781e69732a657e9c1ae73 | 132 | py | Python | qcfractal/storage_sockets/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | qcfractal/storage_sockets/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | qcfractal/storage_sockets/__init__.py | ChayaSt/QCFractal | 2d3c737b0e755d6e5bac743a0beb0714b5a92d0b | [
"BSD-3-Clause"
] | null | null | null | """
Importer for the DB socket class.
"""
__all__ = ["storage_socket_factory"]
from .storage_socket import storage_socket_factory
| 16.5 | 50 | 0.772727 | 17 | 132 | 5.470588 | 0.647059 | 0.419355 | 0.430108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128788 | 132 | 7 | 51 | 18.857143 | 0.808696 | 0.25 | 0 | 0 | 0 | 0 | 0.241758 | 0.241758 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4398f8355c06457343cda94fd4d31dfc0bebe099 | 462 | py | Python | boards/tasks.py | oscarsiles/jotlet | 361f7ad0d32ea96d012020a67493931482207036 | [
"BSD-3-Clause"
] | null | null | null | boards/tasks.py | oscarsiles/jotlet | 361f7ad0d32ea96d012020a67493931482207036 | [
"BSD-3-Clause"
] | 2 | 2022-03-21T22:22:33.000Z | 2022-03-28T22:18:33.000Z | boards/tasks.py | oscarsiles/jotlet | 361f7ad0d32ea96d012020a67493931482207036 | [
"BSD-3-Clause"
] | null | null | null | from django.core import management
def create_thumbnails(img):
img.get_webp
img.get_thumbnail
img.get_thumbnail_webp
def thumbnail_cleanup_command():
return management.call_command("thumbnail", "cleanup")
def history_clean_duplicates_past_hour_command():
return management.call_command("clean_duplicate_history", "-m", "60", "--auto")
def history_clean_old_command():
return management.call_command("clean_old_history", "--auto")
| 23.1 | 83 | 0.764069 | 59 | 462 | 5.610169 | 0.440678 | 0.054381 | 0.208459 | 0.244713 | 0.338369 | 0.23565 | 0 | 0 | 0 | 0 | 0 | 0.004951 | 0.125541 | 462 | 19 | 84 | 24.315789 | 0.814356 | 0 | 0 | 0 | 0 | 0 | 0.155844 | 0.049784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.090909 | 0.272727 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
78e0d8373f70c637a9b34a5bfcc6c1993f18efd2 | 65 | py | Python | godaddypy/exceptions/__init__.py | avitko001c/godaddypy | c5bd91e414cb4831e57fa3bf310d639df29ed4e7 | [
"BSD-3-Clause"
] | null | null | null | godaddypy/exceptions/__init__.py | avitko001c/godaddypy | c5bd91e414cb4831e57fa3bf310d639df29ed4e7 | [
"BSD-3-Clause"
] | null | null | null | godaddypy/exceptions/__init__.py | avitko001c/godaddypy | c5bd91e414cb4831e57fa3bf310d639df29ed4e7 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import absolute_import
from . import exceptions
| 16.25 | 38 | 0.846154 | 8 | 65 | 6.25 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138462 | 65 | 3 | 39 | 21.666667 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
604b0ef043ea017e5c80f2df3a559ba9f126fc7f | 236 | py | Python | shops/models.py | Jackintoshh/Webmappingtest | 71a1aca53ed048dde75b4b673f707680c7f9d551 | [
"MIT"
] | null | null | null | shops/models.py | Jackintoshh/Webmappingtest | 71a1aca53ed048dde75b4b673f707680c7f9d551 | [
"MIT"
] | 7 | 2020-02-12T03:09:10.000Z | 2022-02-10T11:15:31.000Z | shops/models.py | Jackintoshh/Webmappingtest | 71a1aca53ed048dde75b4b673f707680c7f9d551 | [
"MIT"
] | null | null | null | from django.contrib.gis.db import models
class Shop(models.Model):
    name = models.CharField(max_length=100)
    location = models.PointField()
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=50) | 33.714286 | 46 | 0.745763 | 32 | 236 | 5.40625 | 0.625 | 0.260116 | 0.312139 | 0.416185 | 0.312139 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039604 | 0.144068 | 236 | 7 | 47 | 33.714286 | 0.816832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
605e68508b2617bdf5352354dd5821d35ced1f48 | 210 | py | Python | avaland/sources/__init__.py | PSD79/avaland | 142547e48b1728db6efe8a6b9f02af18a1b42bc5 | [
"MIT"
] | 27 | 2020-05-12T22:02:57.000Z | 2021-07-27T10:53:24.000Z | avaland/sources/__init__.py | PSD79/avaland | 142547e48b1728db6efe8a6b9f02af18a1b42bc5 | [
"MIT"
] | null | null | null | avaland/sources/__init__.py | PSD79/avaland | 142547e48b1728db6efe8a6b9f02af18a1b42bc5 | [
"MIT"
] | 2 | 2020-05-13T18:40:03.000Z | 2020-05-14T15:01:07.000Z | from .bia2 import Bia2
from .navahang import Navahang
from .nex1music import Nex1
from .radiojavan import RadioJavan
from .rapfarsi import RapFarsi
from .wikiseda import WikiSeda
from .mrtehran import MrTehran
| 26.25 | 34 | 0.833333 | 28 | 210 | 6.25 | 0.357143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.133333 | 210 | 7 | 35 | 30 | 0.93956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
6064f778529b5f669dec3ada49cfea9c818c27a2 | 139 | py | Python | tests/gnn/test_combine.py | BatsResearch/zsl-kg | 9bc4d4537a0f90ee3bbcefdf90ceae6dbcf48572 | [
"Apache-2.0"
] | 83 | 2021-08-30T02:50:37.000Z | 2022-02-22T09:37:36.000Z | tests/gnn/test_combine.py | BatsResearch/zsl-kg | 9bc4d4537a0f90ee3bbcefdf90ceae6dbcf48572 | [
"Apache-2.0"
] | 2 | 2021-09-10T08:44:13.000Z | 2022-01-23T17:33:35.000Z | tests/gnn/test_combine.py | BatsResearch/zsl-kg | 9bc4d4537a0f90ee3bbcefdf90ceae6dbcf48572 | [
"Apache-2.0"
] | 6 | 2021-09-10T07:09:41.000Z | 2021-11-07T14:31:33.000Z | """
File contains tests for the combine classes
"""
import unittest
from zsl_kg.gnn.combine import Combine, RCGNCombine, AttentionCombine
| 23.166667 | 69 | 0.805755 | 18 | 139 | 6.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 139 | 5 | 70 | 27.8 | 0.909836 | 0.316547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
60a82b6266af795737cd9b74fdeff78ac13477ca | 97 | py | Python | training_codes/biophys2lifmodel_lr/run_lr2_g8_8_test500ms_inh_lif_syn_z104.py | zqwei/LIF_Vis_model | 16f651ac827ba5f0feb40a0e619e600f1251d009 | [
"MIT"
] | null | null | null | training_codes/biophys2lifmodel_lr/run_lr2_g8_8_test500ms_inh_lif_syn_z104.py | zqwei/LIF_Vis_model | 16f651ac827ba5f0feb40a0e619e600f1251d009 | [
"MIT"
] | null | null | null | training_codes/biophys2lifmodel_lr/run_lr2_g8_8_test500ms_inh_lif_syn_z104.py | zqwei/LIF_Vis_model | 16f651ac827ba5f0feb40a0e619e600f1251d009 | [
"MIT"
] | null | null | null | import start0 as start
start.run_simulation('config_lr2_g8_8_test500ms_inh_lif_syn_z104.json')
| 19.4 | 71 | 0.865979 | 17 | 97 | 4.411765 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.072165 | 97 | 4 | 72 | 24.25 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0.489583 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
60cb70fa676e537ad2b8dc7e1d2d9c3ab2e3e9cb | 1,280 | py | Python | polog/tests/handlers/file/rotation/rules/rules/tokenization/tokens/test_size_token.py | pomponchik/polog | 104c5068a65b0eaeab59327aac1a583e2606e77e | [
"MIT"
] | 30 | 2020-07-16T16:52:46.000Z | 2022-03-24T16:56:29.000Z | polog/tests/handlers/file/rotation/rules/rules/tokenization/tokens/test_size_token.py | pomponchik/polog | 104c5068a65b0eaeab59327aac1a583e2606e77e | [
"MIT"
] | 6 | 2021-02-07T22:08:01.000Z | 2021-12-07T21:56:46.000Z | polog/tests/handlers/file/rotation/rules/rules/tokenization/tokens/test_size_token.py | pomponchik/polog | 104c5068a65b0eaeab59327aac1a583e2606e77e | [
"MIT"
] | 4 | 2020-12-22T07:05:34.000Z | 2022-03-24T16:56:50.000Z | import pytest
from polog.handlers.file.rotation.rules.rules.tokenization.tokens.size_token import SizeToken
def test_content_extraction_for_size_token():
"""
Проверяем, что значение из строки извлекается корректно.
"""
    assert SizeToken('b').content == 1
    assert SizeToken('kb').content == 1024
    assert SizeToken('mb').content == 1024 * 1024
    assert SizeToken('gb').content == 1024 * 1024 * 1024
    assert SizeToken('tb').content == 1024 * 1024 * 1024 * 1024
    assert SizeToken('pb').content == 1024 * 1024 * 1024 * 1024 * 1024
    assert SizeToken('byte').content == 1
    assert SizeToken('kilobyte').content == 1024
    assert SizeToken('megabyte').content == 1024 * 1024
    assert SizeToken('gigabyte').content == 1024 * 1024 * 1024
    assert SizeToken('terabyte').content == 1024 * 1024 * 1024 * 1024
    assert SizeToken('petabyte').content == 1024 * 1024 * 1024 * 1024 * 1024
    assert SizeToken('bytes').content == 1
    assert SizeToken('kilobytes').content == 1024
    assert SizeToken('megabytes').content == 1024 * 1024
    assert SizeToken('gigabytes').content == 1024 * 1024 * 1024
    assert SizeToken('terabytes').content == 1024 * 1024 * 1024 * 1024
    assert SizeToken('petabytes').content == 1024 * 1024 * 1024 * 1024 * 1024
| 42.666667 | 93 | 0.677344 | 149 | 1,280 | 5.778523 | 0.308725 | 0.278746 | 0.250871 | 0.293844 | 0.484321 | 0.379791 | 0.229965 | 0.097561 | 0 | 0 | 0 | 0.175624 | 0.185938 | 1,280 | 29 | 94 | 44.137931 | 0.650672 | 0.04375 | 0 | 0 | 0 | 0 | 0.086921 | 0 | 0 | 0 | 0 | 0 | 0.857143 | 1 | 0.047619 | true | 0 | 0.095238 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |