hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0ff8766945091b46b984bb506749080175822e2d | 122 | py | Python | im/kibot/data/load/__init__.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | im/kibot/data/load/__init__.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | im/kibot/data/load/__init__.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | from .kibot_s3_data_loader import KibotS3DataLoader # noqa
from .kibot_sql_data_loader import KibotSqlDataLoader # noqa
| 40.666667 | 61 | 0.852459 | 16 | 122 | 6.125 | 0.625 | 0.183673 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.114754 | 122 | 2 | 62 | 61 | 0.888889 | 0.07377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ba1edf3a6e31007614ffae6220cf0c2f709d7018 | 36 | py | Python | django_deployer/fabfile.py | natea/django-deployer | 5ce7d972db2f8500ec53ad89e7eb312d3360d074 | [
"MIT"
] | 19 | 2015-02-06T06:14:39.000Z | 2021-01-06T22:27:03.000Z | django_deployer/fabfile.py | natea/django-deployer | 5ce7d972db2f8500ec53ad89e7eb312d3360d074 | [
"MIT"
] | null | null | null | django_deployer/fabfile.py | natea/django-deployer | 5ce7d972db2f8500ec53ad89e7eb312d3360d074 | [
"MIT"
] | 2 | 2015-12-22T17:22:15.000Z | 2016-03-02T12:15:01.000Z | from django_deployer.tasks import *
| 18 | 35 | 0.833333 | 5 | 36 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e87b85794d0990b9f23e654d5b13395df77c1dc9 | 196 | py | Python | gym_brt/control/__init__.py | Data-Science-in-Mechanical-Engineering/vision-based-furuta-pendulum | 84bfc5a089a2a8ace250f030f0298d45a3f9772f | [
"MIT"
] | null | null | null | gym_brt/control/__init__.py | Data-Science-in-Mechanical-Engineering/vision-based-furuta-pendulum | 84bfc5a089a2a8ace250f030f0298d45a3f9772f | [
"MIT"
] | null | null | null | gym_brt/control/__init__.py | Data-Science-in-Mechanical-Engineering/vision-based-furuta-pendulum | 84bfc5a089a2a8ace250f030f0298d45a3f9772f | [
"MIT"
] | null | null | null | from gym_brt.control.control import dampen_policy, QubeFlipUpControl, QubeHoldControl, RandomControl, NoControl
from gym_brt.control.calibration import CalibrCtrl, GoToLimCtrl, PIDCtrl, calibrate
| 65.333333 | 111 | 0.867347 | 22 | 196 | 7.590909 | 0.727273 | 0.083832 | 0.11976 | 0.203593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076531 | 196 | 2 | 112 | 98 | 0.922652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e89978ae2b1683ac3de3b5aadf2238ed807852fb | 48 | py | Python | src/core/lexer/__init__.py | hyper-neutrino/avl | b639c066a365eb370de61de57eb610ab128f433c | [
"MIT"
] | null | null | null | src/core/lexer/__init__.py | hyper-neutrino/avl | b639c066a365eb370de61de57eb610ab128f433c | [
"MIT"
] | null | null | null | src/core/lexer/__init__.py | hyper-neutrino/avl | b639c066a365eb370de61de57eb610ab128f433c | [
"MIT"
] | null | null | null | from .lexer import lex
from .token import Token
| 16 | 24 | 0.791667 | 8 | 48 | 4.75 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 2 | 25 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e8ad954dca2b7fee4849d2d5573d37eea2803b8e | 16,813 | py | Python | src/main/python/cfn_sphere/cli.py | rayhwang-kcom/cfn-sphere | e5a3642bea1d16611c178feb93ff89e1f2f188e9 | [
"Apache-2.0"
] | 3 | 2018-08-23T14:36:36.000Z | 2020-06-27T23:30:32.000Z | src/main/python/cfn_sphere/cli.py | rayhwang-kcom/cfn-sphere | e5a3642bea1d16611c178feb93ff89e1f2f188e9 | [
"Apache-2.0"
] | 19 | 2017-09-29T13:43:27.000Z | 2021-02-09T10:39:44.000Z | src/main/python/cfn_sphere/cli.py | rayhwang-kcom/cfn-sphere | e5a3642bea1d16611c178feb93ff89e1f2f188e9 | [
"Apache-2.0"
] | 3 | 2019-02-18T09:36:35.000Z | 2020-06-27T23:30:34.000Z | # Modifications copyright (C) 2017 KCOM
import logging
import sys
import boto3
import botocore.session
import click
import os.path
import re
from botocore.credentials import JSONFileCache
from botocore.exceptions import ClientError, BotoCoreError
from cfn_sphere import StackActionHandler
from cfn_sphere import __version__
from cfn_sphere.aws.cfn import CloudFormation
from cfn_sphere.aws.kms import KMS
from cfn_sphere.exceptions import CfnSphereException
from cfn_sphere.file_loader import FileLoader
from cfn_sphere.stack_configuration import Config
from cfn_sphere.template.transformer import CloudFormationTemplateTransformer
from cfn_sphere.util import convert_file, get_logger, get_latest_version
LOGGER = get_logger(root=True)
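# Resolve a human-readable account identifier for confirmation prompts:
# prefer the first IAM account alias and fall back to the account id
# parsed from the caller's STS ARN.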
def get_first_account_alias_or_account_id():
try:
return boto3.client('iam').list_account_aliases()["AccountAliases"][0]
except IndexError:
return boto3.client('sts').get_caller_identity()["Arn"].split(":")[4]
except (BotoCoreError, ClientError) as e:
LOGGER.error(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Unknown error occurred loading users account alias")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
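# Compare the installed version with the latest released one and make the
# user confirm before continuing with an outdated install.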
def check_update_available():
latest_version = get_latest_version()
if latest_version and __version__ != latest_version:
click.confirm(
"There is an update available (v: {0}).\n"
"Changelog: https://github.com/cfn-sphere/cfn-sphere/issues?q=milestone%3A{0}+\n"
"Do you want to continue?".format(latest_version), abort=True)
@click.group(help="This tool manages AWS CloudFormation templates "
"and stacks by providing an application scope and useful tooling.")
@click.version_option(version=__version__)
def cli():
pass
@cli.command(help="create change set")
@click.argument('config', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--parameter', '-p', default=None, envvar='CFN_SPHERE_PARAMETERS', type=click.STRING, multiple=True,
help="Stack parameter to overwrite, eg: --parameter stack1.p1=v1")
@click.option('--context', '-t', default=None, envvar='CFN_SPHERE_TRANSFORM_CONTEXT', type=click.STRING, multiple=False,
help="transform context yaml")
@click.option('--debug', '-d', is_flag=True, default=False, envvar='CFN_SPHERE_DEBUG', help="Debug output")
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
@click.option('--dry_run', '-n', is_flag=True, default=False, envvar='CFN_SPHERE_DRY_RUN',
help="Dry run.")
def create_change_set(config, profile, parameter, debug, confirm, yes, context, dry_run):
_set_profile(profile)
confirm = confirm or yes
if debug:
LOGGER.setLevel(logging.DEBUG)
boto3.set_stream_logger(name='boto3', level=logging.DEBUG)
boto3.set_stream_logger(name='botocore', level=logging.DEBUG)
else:
LOGGER.setLevel(logging.INFO)
if not confirm:
check_update_available()
click.confirm('This action will modify AWS infrastructure in account: {0}\nAre you sure?'.format(
get_first_account_alias_or_account_id()), abort=True)
try:
config = Config(config_file=config, cli_params=parameter, transform_context=context)
StackActionHandler(config, dry_run).create_change_set()
except CfnSphereException as e:
LOGGER.error(e)
if debug:
LOGGER.exception(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="execute change set")
@click.argument('change_set')
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--debug', '-d', is_flag=True, default=False, envvar='CFN_SPHERE_DEBUG', help="Debug output")
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
@click.option('--region', '-r', default='eu-west-1', type=click.STRING, help="Change set region")
def execute_change_set(change_set, profile, debug, confirm, yes, region):
_set_profile(profile)
confirm = confirm or yes
if debug:
LOGGER.setLevel(logging.DEBUG)
boto3.set_stream_logger(name='boto3', level=logging.DEBUG)
boto3.set_stream_logger(name='botocore', level=logging.DEBUG)
else:
LOGGER.setLevel(logging.INFO)
if not confirm:
check_update_available()
click.confirm('This action will modify AWS infrastructure in account: {0}\nAre you sure?'.format(
get_first_account_alias_or_account_id()), abort=True)
try:
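        # A change set may be referenced by its full ARN; in that case the
        # region embedded in the ARN overrides the --region option.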
matched = re.match(r'arn:aws:cloudformation:([^:]+):.*', change_set)
if matched:
LOGGER.info('ARN detected, setting region to {}'.format(matched.group(1)))
region = matched.group(1)
config_dict = {'change_set': change_set, 'region': str(region)}
config = Config(config_dict=config_dict)
StackActionHandler(config).execute_change_set()
except CfnSphereException as e:
LOGGER.error(e)
if debug:
LOGGER.exception(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="Sync AWS resources with definition file")
@click.argument('config', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--parameter', '-p', default=None, envvar='CFN_SPHERE_PARAMETERS', type=click.STRING, multiple=True,
help="Stack parameter to overwrite, eg: --parameter stack1.p1=v1")
@click.option('--context', '-t', default=None, envvar='CFN_SPHERE_TRANSFORM_CONTEXT', type=click.STRING, multiple=False,
help="transform context yaml")
@click.option('--debug', '-d', is_flag=True, default=False, envvar='CFN_SPHERE_DEBUG', help="Debug output")
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
@click.option('--dry_run', '-n', is_flag=True, default=False, envvar='CFN_SPHERE_DRY_RUN',
help="Dry run.")
def sync(config, profile, parameter, debug, confirm, yes, context, dry_run):
_set_profile(profile)
confirm = confirm or yes or dry_run
if debug:
LOGGER.setLevel(logging.DEBUG)
boto3.set_stream_logger(name='boto3', level=logging.DEBUG)
boto3.set_stream_logger(name='botocore', level=logging.DEBUG)
else:
LOGGER.setLevel(logging.INFO)
if not confirm:
check_update_available()
click.confirm('This action will modify AWS infrastructure in account: {0}\nAre you sure?'.format(
get_first_account_alias_or_account_id()), abort=True)
try:
config = Config(config_file=config, cli_params=parameter, transform_context=context)
StackActionHandler(config, dry_run).create_or_update_stacks()
except CfnSphereException as e:
LOGGER.error(e)
if debug:
LOGGER.exception(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="Delete all stacks in a stack configuration")
@click.argument('config', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--context', '-t', default=None, envvar='CFN_SPHERE_TRANSFORM_CONTEXT', type=click.STRING, multiple=False,
help="transform context yaml")
@click.option('--debug', '-d', is_flag=True, default=False, envvar='CFN_SPHERE_DEBUG', help="Debug output")
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def delete(config, profile, context, debug, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if debug:
LOGGER.setLevel(logging.DEBUG)
else:
LOGGER.setLevel(logging.INFO)
if not confirm:
check_update_available()
click.confirm('This action will delete all stacks in {0} from account: {1}\nAre you sure?'.format(
config, get_first_account_alias_or_account_id()), abort=True)
try:
config = Config(config, transform_context=context)
StackActionHandler(config).delete_stacks()
except CfnSphereException as e:
LOGGER.error(e)
if debug:
LOGGER.exception(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="Convert JSON to YAML or vice versa")
@click.argument('template_file', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--debug', '-d', is_flag=True, default=False, envvar='CFN_SPHERE_DEBUG', help="Debug output")
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def convert(template_file, profile, debug, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if not confirm:
check_update_available()
if debug:
LOGGER.setLevel(logging.DEBUG)
try:
click.echo(convert_file(template_file))
except Exception as e:
LOGGER.error("Error converting {0}:".format(template_file))
LOGGER.exception(e)
sys.exit(1)
@cli.command(help="Render template as it would be used to create/update a stack")
@click.argument('template_file', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def render_template(template_file, profile, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if not confirm:
check_update_available()
loader = FileLoader()
template = loader.get_cloudformation_template(template_file, None)
template = CloudFormationTemplateTransformer.transform_template(template)
click.echo(template.get_pretty_template_json())
@cli.command(help="Validate template with CloudFormation API")
@click.argument('template_file', type=click.Path(exists=True))
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def validate_template(template_file, profile, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if not confirm:
check_update_available()
try:
loader = FileLoader()
template = loader.get_cloudformation_template(template_file, None)
template = CloudFormationTemplateTransformer.transform_template(template)
CloudFormation().validate_template(template)
click.echo("Template is valid")
except CfnSphereException as e:
LOGGER.error(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="Encrypt a given string with AWS Key Management Service")
@click.argument('region', type=str)
@click.argument('keyid', type=str)
@click.argument('cleartext', type=str)
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def encrypt(region, keyid, cleartext, profile, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if not confirm:
check_update_available()
try:
        ciphertext = KMS(region).encrypt(keyid, cleartext)
        click.echo("Ciphertext: {0}".format(ciphertext))
except CfnSphereException as e:
LOGGER.error(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
@cli.command(help="Decrypt a given ciphertext with AWS Key Management Service")
@click.argument('region', type=str)
@click.argument('ciphertext', type=str)
@click.option('--profile', default=None, envvar='AWS_PROFILE', type=click.STRING,
help='Use a specific profile from your credential file')
@click.option('--confirm', '-c', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes")
@click.option('--yes', '-y', is_flag=True, default=False, envvar='CFN_SPHERE_CONFIRM',
help="Override user confirm dialog with yes (alias for -c/--confirm")
def decrypt(region, ciphertext, profile, confirm, yes):
_set_profile(profile)
confirm = confirm or yes
if not confirm:
check_update_available()
try:
cleartext = KMS(region).decrypt(ciphertext)
click.echo("Cleartext: {0}".format(cleartext))
except CfnSphereException as e:
LOGGER.error(e)
sys.exit(1)
except Exception as e:
LOGGER.error("Failed with unexpected error")
LOGGER.exception(e)
LOGGER.info("Please report at https://github.com/KCOM-Enterprise/cfn-square/issues!")
sys.exit(1)
def _set_profile(profile_name):
if profile_name is not None:
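        # Reuse the AWS CLI's on-disk credential cache so that assume-role
        # credentials (e.g. MFA sessions) survive between invocations.
        # Note: this reaches into boto3's private botocore session.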
cache_dir = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))
boto3.setup_default_session(profile_name=profile_name)
cred_chain = boto3.DEFAULT_SESSION._session.get_component("credential_provider")
cred_chain.get_provider("assume-role").cache = JSONFileCache(cache_dir)
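# Entry point that dispatches to the click command group.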
def main():
cli()
| 43.67013 | 120 | 0.688634 | 2,180 | 16,813 | 5.177064 | 0.108716 | 0.032695 | 0.039872 | 0.037657 | 0.766614 | 0.758019 | 0.749158 | 0.746411 | 0.746411 | 0.746411 | 0 | 0.004149 | 0.182835 | 16,813 | 384 | 121 | 43.783854 | 0.817308 | 0.002201 | 0 | 0.721713 | 0 | 0.003058 | 0.281209 | 0.009479 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042813 | false | 0.003058 | 0.055046 | 0 | 0.103976 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e8ae7de798bd5c51a1da0fc2d5b632574dea3501 | 205 | py | Python | bibliopixel/util/image/directory.py | rec/leds | ed5fd11ed155e7008d4ef6d5b3d82cd7f8b3ed6a | [
"MIT"
] | 253 | 2015-01-03T23:17:57.000Z | 2021-12-14T02:31:08.000Z | bibliopixel/util/image/directory.py | rec/leds | ed5fd11ed155e7008d4ef6d5b3d82cd7f8b3ed6a | [
"MIT"
] | 879 | 2015-01-11T16:07:25.000Z | 2021-12-10T16:24:31.000Z | bibliopixel/util/image/directory.py | rec/leds | ed5fd11ed155e7008d4ef6d5b3d82cd7f8b3ed6a | [
"MIT"
] | 71 | 2015-01-04T01:02:47.000Z | 2022-03-25T18:30:10.000Z | from . import gif
class Writer(gif.Writer):
def __init__(self, writer):
writer.gif_dir = writer.gif_dir or writer.basename
super().__init__(writer)
def write(self):
pass
| 18.636364 | 58 | 0.639024 | 27 | 205 | 4.481481 | 0.518519 | 0.223141 | 0.198347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258537 | 205 | 10 | 59 | 20.5 | 0.796053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
fa42110799fb23bfb1ffe6c17b6a9556b06ed0fc | 122 | py | Python | quadtree/__init__.py | hamolicious/Quad-Tree | 6c17b54f55a45d2627dafe80f898eff54d8d227f | [
"WTFPL"
] | null | null | null | quadtree/__init__.py | hamolicious/Quad-Tree | 6c17b54f55a45d2627dafe80f898eff54d8d227f | [
"WTFPL"
] | null | null | null | quadtree/__init__.py | hamolicious/Quad-Tree | 6c17b54f55a45d2627dafe80f898eff54d8d227f | [
"WTFPL"
] | null | null | null |
from quadtree.primitives import *
from quadtree.intersector import intersects, contains
from quadtree.tree import Tree
| 17.428571 | 53 | 0.827869 | 15 | 122 | 6.733333 | 0.533333 | 0.356436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 122 | 6 | 54 | 20.333333 | 0.95283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa5944157151f64bc815d46461e806abcb198122 | 41 | py | Python | jinahub/indexers/searcher/compound/FaissLMDBSearcher/__init__.py | sauravgarg540/executors | c06a16633767346eee96ec019ae6a171f125f6cb | [
"Apache-2.0"
] | null | null | null | jinahub/indexers/searcher/compound/FaissLMDBSearcher/__init__.py | sauravgarg540/executors | c06a16633767346eee96ec019ae6a171f125f6cb | [
"Apache-2.0"
] | null | null | null | jinahub/indexers/searcher/compound/FaissLMDBSearcher/__init__.py | sauravgarg540/executors | c06a16633767346eee96ec019ae6a171f125f6cb | [
"Apache-2.0"
] | null | null | null | from .faiss_lmdb import FaissLMDBSearcher | 41 | 41 | 0.902439 | 5 | 41 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3afc3eee706ec4fe7f0cdeeb70f2790cac5fd5a6 | 34 | py | Python | yawigle/__init__.py | tabajara-ltd/yawigle | 15dc8dd27345eebf5dbac646c1c23ca303df686f | [
"BSD-3-Clause"
] | 2 | 2021-04-24T22:05:10.000Z | 2021-04-24T22:05:22.000Z | yawigle/__init__.py | tabajara-ltd/yawigle | 15dc8dd27345eebf5dbac646c1c23ca303df686f | [
"BSD-3-Clause"
] | 2 | 2021-02-27T15:37:02.000Z | 2021-02-27T15:40:23.000Z | yawigle/__init__.py | tabajara-ltd/yawigle | 15dc8dd27345eebf5dbac646c1c23ca303df686f | [
"BSD-3-Clause"
] | 1 | 2021-05-04T11:45:56.000Z | 2021-05-04T11:45:56.000Z | from yawigle.yawigle import client | 34 | 34 | 0.882353 | 5 | 34 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d74d5ca4e59c11ae970b937d5da1b4800eba0de7 | 42,057 | py | Python | test/interface/test_filters.py | earwick/sqlalchemy-filters | 68f66a88c5fc842daf56f226f6a1d0e60c1381da | [
"Apache-2.0"
] | 3 | 2022-03-07T16:54:54.000Z | 2022-03-22T10:17:02.000Z | test/interface/test_filters.py | earwick/sqlalchemy-filters | 68f66a88c5fc842daf56f226f6a1d0e60c1381da | [
"Apache-2.0"
] | 1 | 2021-11-10T11:28:27.000Z | 2021-11-16T11:45:20.000Z | test/interface/test_filters.py | earwick/sqlalchemy-filters | 68f66a88c5fc842daf56f226f6a1d0e60c1381da | [
"Apache-2.0"
] | 8 | 2021-11-08T11:38:44.000Z | 2022-03-23T16:19:46.000Z | # -*- coding: utf-8 -*-
import datetime
import pytest
from sqlalchemy import func
from sqlalchemy.orm import joinedload
from sqlalchemy.sql import select
from sqlalchemy_filters import apply_filters
from sqlalchemy_filters.exceptions import (
BadFilterFormat, BadSpec, FieldNotFound
)
from sqlalchemy_filters.models import sqlalchemy_version_cmp
from test.models import Foo, Bar, Qux, Corge
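# A filter spec is a plain dict such as {'field': 'name', 'op': '==',
# 'value': 'name_1'}; apply_filters() returns a new query with the
# corresponding WHERE clause attached.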
ARRAY_NOT_SUPPORTED = (
"ARRAY type and operators supported only by PostgreSQL"
)
STRING_DATE_TIME_NOT_SUPPORTED = (
"TODO: String Time / DateTime values currently not working as filters by "
"SQLite"
)
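# Each fixture below seeds a deterministic set of rows; the ids are
# asserted on directly in the tests.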
@pytest.fixture
def multiple_foos_inserted(session, multiple_bars_inserted):
foo_1 = Foo(id=1, bar_id=1, name='name_1', count=50)
foo_2 = Foo(id=2, bar_id=2, name='name_2', count=100)
foo_3 = Foo(id=3, bar_id=3, name='name_1', count=None)
foo_4 = Foo(id=4, bar_id=4, name='name_4', count=150)
session.add_all([foo_1, foo_2, foo_3, foo_4])
session.commit()
@pytest.fixture
def multiple_bars_inserted(session):
bar_1 = Bar(id=1, name='name_1', count=5)
bar_2 = Bar(id=2, name='name_2', count=10)
bar_3 = Bar(id=3, name='name_1', count=None)
bar_4 = Bar(id=4, name='name_4', count=15)
session.add_all([bar_1, bar_2, bar_3, bar_4])
session.commit()
@pytest.fixture
def multiple_quxs_inserted(session):
qux_1 = Qux(
id=1, name='name_1', count=5,
created_at=datetime.date(2016, 7, 12),
execution_time=datetime.datetime(2016, 7, 12, 1, 5, 9),
expiration_time=datetime.time(1, 5, 9)
)
qux_2 = Qux(
id=2, name='name_2', count=10,
created_at=datetime.date(2016, 7, 13),
execution_time=datetime.datetime(2016, 7, 13, 2, 5, 9),
expiration_time=datetime.time(2, 5, 9)
)
qux_3 = Qux(
id=3, name='name_1', count=None,
created_at=None, execution_time=None, expiration_time=None
)
qux_4 = Qux(
id=4, name='name_4', count=15,
created_at=datetime.date(2016, 7, 14),
execution_time=datetime.datetime(2016, 7, 14, 3, 5, 9),
expiration_time=datetime.time(3, 5, 9)
)
session.add_all([qux_1, qux_2, qux_3, qux_4])
session.commit()
@pytest.fixture
def multiple_corges_inserted(session, is_postgresql):
if is_postgresql:
corge_1 = Corge(id=1, name='name_1', tags=[])
corge_2 = Corge(id=2, name='name_2', tags=['foo'])
corge_3 = Corge(id=3, name='name_3', tags=['foo', 'bar'])
corge_4 = Corge(id=4, name='name_4', tags=['bar', 'baz'])
session.add_all([corge_1, corge_2, corge_3, corge_4])
session.commit()
class TestFiltersNotApplied:
def test_no_filters_provided(self, session):
query = session.query(Bar)
filters = []
filtered_query = apply_filters(query, filters)
assert query == filtered_query
@pytest.mark.parametrize('filter_', ['some text', 1, ''])
def test_wrong_filters_format(self, session, filter_):
query = session.query(Bar)
filters = [filter_]
with pytest.raises(BadFilterFormat) as err:
apply_filters(query, filters)
expected_error = 'Filter spec `{}` should be a dictionary.'.format(
filter_
)
assert expected_error == err.value.args[0]
def test_invalid_operator(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'op_not_valid', 'value': 'name_1'}]
with pytest.raises(BadFilterFormat) as err:
apply_filters(query, filters)
assert 'Operator `op_not_valid` not valid.' == err.value.args[0]
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_no_operator_provided(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 3
def test_no_field_provided(self, session):
query = session.query(Bar)
filters = [{'op': '==', 'value': 'name_1'}]
with pytest.raises(BadFilterFormat) as err:
apply_filters(query, filters)
expected_error = '`field` is a mandatory filter attribute.'
assert expected_error == err.value.args[0]
# TODO: replace this test once we add the option to compare against
# another field
def test_no_value_provided(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': '==', }]
with pytest.raises(BadFilterFormat) as err:
apply_filters(query, filters)
assert '`value` must be provided.' == err.value.args[0]
def test_invalid_field(self, session):
query = session.query(Bar)
filters = [{'field': 'invalid_field', 'op': '==', 'value': 'name_1'}]
with pytest.raises(FieldNotFound) as err:
apply_filters(query, filters)
expected_error = (
"Model <class 'test.models.Bar'> has no column `invalid_field`."
)
assert expected_error == err.value.args[0]
@pytest.mark.parametrize('attr_name', [
'metadata', # model attribute
'foos', # model relationship
])
def test_invalid_field_but_valid_model_attribute(self, session, attr_name):
query = session.query(Bar)
filters = [{'field': attr_name, 'op': '==', 'value': 'name_1'}]
with pytest.raises(FieldNotFound) as err:
apply_filters(query, filters)
expected_error = (
"Model <class 'test.models.Bar'> has no column `{}`.".format(
attr_name
)
)
assert expected_error == err.value.args[0]
class TestMultipleModels:
# TODO: multi-model should be tested for each filter type
@pytest.mark.usefixtures('multiple_bars_inserted')
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_multiple_models(self, session):
query = session.query(Bar, Qux)
filters = [
{'model': 'Bar', 'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Qux', 'field': 'name', 'op': '==', 'value': 'name_1'},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 4
bars, quxs = zip(*result)
assert set(map(type, bars)) == {Bar}
assert {bar.id for bar in bars} == {1, 3}
assert {bar.name for bar in bars} == {"name_1"}
assert set(map(type, quxs)) == {Qux}
assert {qux.id for qux in quxs} == {1, 3}
assert {qux.name for qux in quxs} == {"name_1"}
class TestAutoJoin:
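    # Filters that name a model not present in the query trigger an
    # automatic join, unless do_auto_join=False is passed.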
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_auto_join(self, session):
query = session.query(Foo)
filters = [
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
assert result[0].bar_id == 3
assert result[0].bar.count is None
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_do_not_auto_join(self, session):
query = session.query(Foo)
filters = [
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
]
with pytest.raises(BadSpec) as exc:
apply_filters(query, filters, do_auto_join=False)
assert 'The query does not contain model `Bar`' in str(exc)
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_noop_if_query_contains_named_models(self, session):
query = session.query(Foo).join(Bar)
filters = [
{'model': 'Foo', 'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
assert result[0].bar_id == 3
assert result[0].bar.count is None
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_auto_join_to_invalid_model(self, session):
query = session.query(Foo)
filters = [
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
{'model': 'Qux', 'field': 'created_at', 'op': 'is_not_null'}
]
with pytest.raises(BadSpec) as err:
apply_filters(query, filters)
assert 'The query does not contain model `Qux`.' == err.value.args[0]
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_ambiguous_query(self, session):
query = session.query(Foo).join(Bar)
filters = [
{'field': 'name', 'op': '==', 'value': 'name_1'}, # ambiguous
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
]
with pytest.raises(BadSpec) as err:
apply_filters(query, filters)
assert 'Ambiguous spec. Please specify a model.' == err.value.args[0]
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_eager_load(self, session):
# behaves as if the joinedload wasn't present
query = session.query(Foo).options(joinedload(Foo.bar))
filters = [
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'model': 'Bar', 'field': 'count', 'op': 'is_null'},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
assert result[0].bar_id == 3
assert result[0].bar.count is None
class TestApplyIsNullFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_filter_field_with_null_values(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'is_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_filter_field_with_no_null_values(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'is_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 0
class TestApplyIsNotNullFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_filter_field_with_null_values(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'is_not_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 4
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_filter_field_with_no_null_values(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'is_not_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 4
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 3
assert result[3].id == 4
class TestApplyFiltersMultipleTimes:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_concatenate_queries(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': '==', 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[0].name == 'name_1'
assert result[1].id == 3
assert result[1].name == 'name_1'
filters = [{'field': 'id', 'op': '==', 'value': 3}]
filtered_query = apply_filters(filtered_query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
assert result[0].name == 'name_1'
class TestApplyFilterWithoutList:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_a_single_dict_can_be_supplied_as_filters(self, session):
query = session.query(Bar)
filters = {'field': 'name', 'op': '==', 'value': 'name_1'}
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[0].name == 'name_1'
assert result[1].id == 3
assert result[1].name == 'name_1'
class TestApplyFilterOnFieldBasedQuery:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_apply_filter_on_single_field_query(self, session):
query = session.query(Bar.id)
filters = [{'field': 'name', 'op': '==', 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0] == (1,)
assert result[1] == (3,)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_apply_filter_on_aggregate_query(self, session):
query = session.query(func.count(Bar.id))
filters = [{'field': 'name', 'op': '==', 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0] == (2,)
class TestApplyEqualToFilter:
@pytest.mark.parametrize('operator', ['==', 'eq'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'name', 'op': operator, 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[0].name == 'name_1'
assert result[1].id == 3
assert result[1].name == 'name_1'
@pytest.mark.parametrize(
'filters', [
[ # filters using `==` in a list
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'field': 'id', 'op': '==', 'value': 3}
],
( # filters using `eq` in a tuple
{'field': 'name', 'op': 'eq', 'value': 'name_1'},
{'field': 'id', 'op': 'eq', 'value': 3}
)
]
)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, filters
):
query = session.query(Bar)
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
assert result[0].name == 'name_1'
class TestApplyNotEqualToFilter:
@pytest.mark.parametrize('operator', ['!=', 'ne'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'name', 'op': operator, 'value': 'name_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 2
assert result[0].name == 'name_2'
assert result[1].id == 4
assert result[1].name == 'name_4'
@pytest.mark.parametrize('operator', ['!=', 'ne'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, operator
):
query = session.query(Bar)
filters = [
{'field': 'name', 'op': operator, 'value': 'name_2'},
{'field': 'id', 'op': operator, 'value': 3}
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[0].name == 'name_1'
assert result[1].id == 4
assert result[1].name == 'name_4'
class TestApplyGreaterThanFilter:
@pytest.mark.parametrize('operator', ['>', 'gt'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'count', 'op': operator, 'value': '5'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 2
assert result[1].id == 4
@pytest.mark.parametrize('operator', ['>', 'gt'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, operator
):
query = session.query(Bar)
filters = [
{'field': 'count', 'op': operator, 'value': '5'},
{'field': 'id', 'op': operator, 'value': 2},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 4
class TestApplyLessThanFilter:
@pytest.mark.parametrize('operator', ['<', 'lt'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'count', 'op': operator, 'value': '7'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 1
@pytest.mark.parametrize('operator', ['<', 'lt'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, operator
):
query = session.query(Bar)
filters = [
{'field': 'count', 'op': operator, 'value': '7'},
{'field': 'id', 'op': operator, 'value': 1},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 0
class TestApplyGreaterOrEqualThanFilter:
@pytest.mark.parametrize('operator', ['>=', 'ge'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'count', 'op': operator, 'value': '5'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 4
@pytest.mark.parametrize('operator', ['>=', 'ge'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, operator
):
query = session.query(Bar)
filters = [
{'field': 'count', 'op': operator, 'value': '5'},
{'field': 'id', 'op': operator, 'value': 4},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 4
class TestApplyLessOrEqualThanFilter:
@pytest.mark.parametrize('operator', ['<=', 'le'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session, operator):
query = session.query(Bar)
filters = [{'field': 'count', 'op': operator, 'value': '15'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 4
@pytest.mark.parametrize('operator', ['<=', 'le'])
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_multiple_filters_applied_to_a_single_model(
self, session, operator
):
query = session.query(Bar)
filters = [
{'field': 'count', 'op': operator, 'value': '15'},
{'field': 'id', 'op': operator, 'value': 1},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 1
class TestApplyLikeFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'like', 'value': '%me_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 3
class TestApplyILikeFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'ilike', 'value': '%ME_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 3
class TestApplyNotILikeFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_one_filter_applied_to_a_single_model(self, session):
query = session.query(Bar)
filters = [{'field': 'name', 'op': 'not_ilike', 'value': '%ME_1'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 2
assert result[1].id == 4
class TestApplyInFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_field_not_in_value_list(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'in', 'value': [1, 2, 3]}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 0
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_field_in_value_list(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'in', 'value': [15, 2, 3]}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 4
class TestApplyNotInFilter:
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_field_not_in_value_list(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'not_in', 'value': [1, 2, 3]}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 4
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_field_in_value_list(self, session):
query = session.query(Bar)
filters = [{'field': 'count', 'op': 'not_in', 'value': [15, 2, 3]}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 2
class TestDateFields:
@pytest.mark.parametrize(
'value',
[
datetime.date(2016, 7, 14),
datetime.date(2016, 7, 14).isoformat()
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_date_equality(self, session, value):
query = session.query(Qux)
filters = [{
'field': 'created_at',
'op': '==',
'value': value
}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].created_at == datetime.date(2016, 7, 14)
@pytest.mark.parametrize(
'value',
[
datetime.date(2016, 7, 13),
datetime.date(2016, 7, 13).isoformat()
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_multiple_dates(self, session, value):
query = session.query(Qux)
filters = [{
'field': 'created_at',
'op': '>=',
'value': value
}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].created_at == datetime.date(2016, 7, 13)
assert result[1].created_at == datetime.date(2016, 7, 14)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_null_date(self, session):
query = session.query(Qux)
filters = [{'field': 'created_at', 'op': 'is_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].created_at is None
class TestTimeFields:
@pytest.mark.parametrize(
'value',
[
datetime.time(3, 5, 9),
datetime.time(3, 5, 9).isoformat() # '03:05:09'
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_time_equality(self, session, is_sqlite, value):
if isinstance(value, str) and is_sqlite:
pytest.skip(STRING_DATE_TIME_NOT_SUPPORTED)
query = session.query(Qux)
filters = [{'field': 'expiration_time', 'op': '==', 'value': value}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].expiration_time == datetime.time(3, 5, 9)
@pytest.mark.parametrize(
'value',
[
datetime.time(2, 5, 9),
datetime.time(2, 5, 9).isoformat() # '02:05:09'
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_multiple_times(self, session, is_sqlite, value):
if isinstance(value, str) and is_sqlite:
pytest.skip(STRING_DATE_TIME_NOT_SUPPORTED)
query = session.query(Qux)
filters = [{
'field': 'expiration_time',
'op': '>=',
'value': value
}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].expiration_time == datetime.time(2, 5, 9)
assert result[1].expiration_time == datetime.time(3, 5, 9)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_null_time(self, session):
query = session.query(Qux)
filters = [{'field': 'expiration_time', 'op': 'is_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].expiration_time is None
class TestDateTimeFields:
@pytest.mark.parametrize(
'value',
[
datetime.datetime(2016, 7, 14, 3, 5, 9),
# '2016-07-14T03:05:09'
datetime.datetime(2016, 7, 14, 3, 5, 9).isoformat()
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_datetime_equality(self, session, is_sqlite, value):
if isinstance(value, str) and is_sqlite:
pytest.skip(STRING_DATE_TIME_NOT_SUPPORTED)
query = session.query(Qux)
filters = [{
'field': 'execution_time',
'op': '==',
'value': value
}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].execution_time == datetime.datetime(
2016, 7, 14, 3, 5, 9
)
@pytest.mark.parametrize(
'value',
[
datetime.datetime(2016, 7, 13, 2, 5, 9),
# '2016-07-13T02:05:09'
datetime.datetime(2016, 7, 13, 2, 5, 9).isoformat()
]
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_multiple_datetimes(self, session, is_sqlite, value):
if isinstance(value, str) and is_sqlite:
pytest.skip(STRING_DATE_TIME_NOT_SUPPORTED)
query = session.query(Qux)
filters = [{
'field': 'execution_time',
'op': '>=',
'value': value
}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].execution_time == datetime.datetime(
2016, 7, 13, 2, 5, 9
)
assert result[1].execution_time == datetime.datetime(
2016, 7, 14, 3, 5, 9
)
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_null_datetime(self, session):
query = session.query(Qux)
filters = [{'field': 'execution_time', 'op': 'is_null'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].execution_time is None
class TestApplyBooleanFunctions:
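    # `and`/`or`/`not` specs take an iterable of sub-filters and may be
    # nested arbitrarily.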
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_or(self, session):
query = session.query(Bar)
filters = [
{'or': [
{'field': 'id', 'op': '==', 'value': 1},
{'field': 'id', 'op': '==', 'value': 3},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 3
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_or_with_one_arg(self, session):
query = session.query(Bar)
filters = [
{'or': [
{'field': 'id', 'op': '==', 'value': 1},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 1
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_or_with_three_args(self, session):
query = session.query(Bar)
filters = [
{'or': [
{'field': 'id', 'op': '==', 'value': 1},
{'field': 'id', 'op': '==', 'value': 3},
{'field': 'id', 'op': '==', 'value': 4},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 3
assert result[2].id == 4
@pytest.mark.parametrize(
('or_args', 'expected_error'), [
(
[],
'`or` must have one or more arguments'
),
(
{},
'`or` value must be an iterable across the function arguments'
),
(
'hello',
'`or` value must be an iterable across the function arguments'
),
]
)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_or_with_bad_format(self, session, or_args, expected_error):
query = session.query(Bar)
filters = [{'or': or_args}]
with pytest.raises(BadFilterFormat) as exc:
apply_filters(query, filters)
assert expected_error in str(exc)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_and(self, session):
query = session.query(Bar)
filters = [
{'and': [
{'field': 'id', 'op': '<=', 'value': 2},
{'field': 'count', 'op': '>=', 'value': 6},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 2
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_and_with_one_arg(self, session):
query = session.query(Bar)
filters = [
{'and': [
{'field': 'id', 'op': '==', 'value': 3},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_and_with_three_args(self, session):
query = session.query(Bar)
filters = [
{'and': [
{'field': 'id', 'op': '<=', 'value': 3},
{'field': 'name', 'op': '==', 'value': 'name_1'},
{'field': 'count', 'op': 'is_not_null'},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 1
@pytest.mark.parametrize(
('and_args', 'expected_error'), [
(
[],
'`and` must have one or more arguments'
),
(
{},
'`and` value must be an iterable across the function arguments'
),
(
'hello',
'`and` value must be an iterable across the function arguments'
),
]
)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_and_with_bad_format(self, session, and_args, expected_error):
query = session.query(Bar)
filters = [{'and': and_args}]
with pytest.raises(BadFilterFormat) as exc:
apply_filters(query, filters)
assert expected_error in str(exc)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_not(self, session):
query = session.query(Bar)
filters = [
{'not': [
{'field': 'id', 'op': '==', 'value': 3},
]},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 3
assert result[0].id == 1
assert result[1].id == 2
assert result[2].id == 4
@pytest.mark.parametrize(
('not_args', 'expected_error'), [
(
[{'field': 'id', 'op': '==', 'value': 1},
{'field': 'id', 'op': '==', 'value': 2}],
'`not` must have one argument'
),
(
[],
'`not` must have one argument'
),
(
{},
'`not` value must be an iterable across the function arguments'
),
(
'hello',
'`not` value must be an iterable across the function arguments'
),
]
)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_not_with_bad_format(self, session, not_args, expected_error):
query = session.query(Bar)
filters = [{'not': not_args}]
with pytest.raises(BadFilterFormat) as exc:
apply_filters(query, filters)
assert expected_error in str(exc)
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_complex(self, session):
query = session.query(Bar)
filters = [
{
'and': [
{
'or': [
{'field': 'id', 'op': '==', 'value': 2},
{'field': 'id', 'op': '==', 'value': 3},
]
},
{
'not': [
{'field': 'name', 'op': '==', 'value': 'name_2'}
]
},
],
}
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
@pytest.mark.usefixtures('multiple_bars_inserted')
def test_complex_using_tuples(self, session):
query = session.query(Bar)
filters = (
{
'and': (
{
'or': (
{'field': 'id', 'op': '==', 'value': 2},
{'field': 'id', 'op': '==', 'value': 3},
)
},
{
'not': (
{'field': 'name', 'op': '==', 'value': 'name_2'},
)
},
),
},
)
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
assert result[0].id == 3
class TestApplyArrayFilters:
@pytest.mark.usefixtures('multiple_corges_inserted')
def test_any_value_in_array(self, session, is_postgresql):
if not is_postgresql:
pytest.skip(ARRAY_NOT_SUPPORTED)
query = session.query(Corge)
filters = [{'field': 'tags', 'op': 'any', 'value': 'foo'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 2
assert result[1].id == 3
@pytest.mark.usefixtures('multiple_corges_inserted')
def test_not_any_values_in_array(self, session, is_postgresql):
if not is_postgresql:
pytest.skip(ARRAY_NOT_SUPPORTED)
query = session.query(Corge)
filters = [{'field': 'tags', 'op': 'not_any', 'value': 'foo'}]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
assert result[0].id == 1
assert result[1].id == 4
class TestHybridAttributes:
@pytest.mark.usefixtures('multiple_bars_inserted')
@pytest.mark.parametrize(
('field, expected_error'),
[
('foos', "Model <class 'test.models.Bar'> has no column `foos`."),
(
'__mapper__',
"Model <class 'test.models.Bar'> has no column `__mapper__`.",
),
(
'not_valid',
"Model <class 'test.models.Bar'> has no column `not_valid`.",
),
]
)
def test_orm_descriptors_not_valid_hybrid_attributes(
self, session, field, expected_error
):
query = session.query(Bar)
filters = [
{
'model': 'Bar',
'field': field,
'op': '==',
'value': 100
}
]
with pytest.raises(FieldNotFound) as exc:
apply_filters(query, filters)
assert expected_error in str(exc)
@pytest.mark.usefixtures('multiple_bars_inserted')
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_by_hybrid_properties(self, session):
query = session.query(Bar, Qux)
filters = [
{
'model': 'Bar',
'field': 'count_square',
'op': '==',
'value': 100
},
{
'model': 'Qux',
'field': 'count_square',
'op': '>=',
'value': 26
},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 2
bars, quxs = zip(*result)
assert set(map(type, bars)) == {Bar}
assert {bar.id for bar in bars} == {2}
assert {bar.count_square for bar in bars} == {100}
assert set(map(type, quxs)) == {Qux}
assert {qux.id for qux in quxs} == {2, 4}
assert {qux.count_square for qux in quxs} == {100, 225}
@pytest.mark.usefixtures('multiple_bars_inserted')
@pytest.mark.usefixtures('multiple_quxs_inserted')
def test_filter_by_hybrid_methods(self, session):
query = session.query(Bar, Qux)
filters = [
{
'model': 'Bar',
'field': 'three_times_count',
'op': '==',
'value': 30
},
{
'model': 'Qux',
'field': 'three_times_count',
'op': '>=',
'value': 31
},
]
filtered_query = apply_filters(query, filters)
result = filtered_query.all()
assert len(result) == 1
bars, quxs = zip(*result)
assert set(map(type, bars)) == {Bar}
assert {bar.id for bar in bars} == {2}
assert {bar.three_times_count() for bar in bars} == {30}
assert set(map(type, quxs)) == {Qux}
assert {qux.id for qux in quxs} == {4}
assert {qux.three_times_count() for qux in quxs} == {45}
class TestSelectObject:
@pytest.mark.usefixtures('multiple_foos_inserted')
def test_filter_on_select(self, session):
if sqlalchemy_version_cmp('<', '1.4'):
pytest.skip("Sqlalchemy select style 2.0 not supported")
query = select(Foo)
filters = [
{
'model': 'Bar',
'field': 'name',
'op': '==',
'value': 'name_2'
}
]
query = apply_filters(query, filters)
result = session.execute(query).fetchall()
assert len(result) == 1
assert result[0][0].name == 'name_2'
| 31.292411 | 79 | 0.562403 | 4,756 | 42,057 | 4.784693 | 0.055509 | 0.064554 | 0.051547 | 0.072772 | 0.846546 | 0.830858 | 0.804403 | 0.771884 | 0.718843 | 0.689269 | 0 | 0.023561 | 0.295599 | 42,057 | 1,343 | 80 | 31.315711 | 0.744574 | 0.008821 | 0 | 0.626437 | 0 | 0 | 0.128171 | 0.034412 | 0 | 0 | 0 | 0.000745 | 0.181034 | 1 | 0.069923 | false | 0 | 0.008621 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
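# --- usage sketch ---------------------------------------------------------------
# A minimal, self-contained run of the filter-spec format the tests above
# exercise. It assumes the `sqlalchemy-filters` package (the source of
# `apply_filters`) and SQLAlchemy >= 1.4; the `Bar` model, engine, and rows
# below are stand-ins for the test fixtures, not the real ones.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy_filters import apply_filters

Base = declarative_base()

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Bar(id=i, name='name_%d' % i) for i in range(1, 5)])
session.commit()

# Boolean functions nest arbitrarily, mirroring test_complex above:
# (id == 2 OR id == 3) AND NOT (name == 'name_2')  ->  only id 3 survives.
filters = [
    {'and': [
        {'or': [
            {'field': 'id', 'op': '==', 'value': 2},
            {'field': 'id', 'op': '==', 'value': 3},
        ]},
        {'not': [{'field': 'name', 'op': '==', 'value': 'name_2'}]},
    ]},
]
print([bar.id for bar in apply_filters(session.query(Bar), filters).all()])  # [3]
# ----------------------------------------------------------------------------------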
d1290d42252e81ab358de5767b87ffdb66845a3d | 106 | py | Python | maricilib/django/middleware/__init__.py | marici/recipebook | b46e06bf955788462f659d923ef47e329c807f92 | [
"MIT"
] | 2 | 2017-06-04T11:30:04.000Z | 2017-06-21T20:17:34.000Z | maricilib/django/middleware/__init__.py | marici/recipebook | b46e06bf955788462f659d923ef47e329c807f92 | [
"MIT"
] | null | null | null | maricilib/django/middleware/__init__.py | marici/recipebook | b46e06bf955788462f659d923ef47e329c807f92 | [
"MIT"
] | null | null | null | from SSLRedirectMiddleware import SSLRedirectMiddleware
from DebugSQLMiddleware import DebugSQLMiddleware
| 35.333333 | 55 | 0.924528 | 8 | 106 | 12.25 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 106 | 2 | 56 | 53 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d14f304950e5d1e0e377f2c1f688aef8c14c0bcd | 40 | py | Python | src/eAsisitent_scraper/__init__.py | PingWasFun/eAsistent-scraper | dbd2630b48cc07183a93a12d00c73371cbd3f46d | [
"MIT"
] | null | null | null | src/eAsisitent_scraper/__init__.py | PingWasFun/eAsistent-scraper | dbd2630b48cc07183a93a12d00c73371cbd3f46d | [
"MIT"
] | 10 | 2022-03-20T07:11:49.000Z | 2022-03-23T20:22:36.000Z | src/eAsisitent_scraper/__init__.py | PingWasFun/eAsistent-scraper | dbd2630b48cc07183a93a12d00c73371cbd3f46d | [
"MIT"
] | null | null | null | from .scraper import get_schedule_data
| 13.333333 | 38 | 0.85 | 6 | 40 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 2 | 39 | 20 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d16c9cd816667b007e626d1c6651eca57844c77c | 33,934 | py | Python | grdc_seasonal_plots.py | amforte/Caucasus_Erosion | c839c90282f87256220abe390993b362b88b8b74 | [
"MIT"
] | 2 | 2021-05-15T05:04:57.000Z | 2021-12-10T02:25:29.000Z | grdc_seasonal_plots.py | amforte/Caucasus_Erosion | c839c90282f87256220abe390993b362b88b8b74 | [
"MIT"
] | null | null | null | grdc_seasonal_plots.py | amforte/Caucasus_Erosion | c839c90282f87256220abe390993b362b88b8b74 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Plots the event, seasonal, and annual fraction vs a variety of other metrics.
Written by Adam M. Forte for
"Low variability runoff inhibits coupling of climate, tectonics, and
topography in the Greater Caucasus"
If you use this code or derivatives, please cite the original paper.
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
qdf=pd.read_csv('data_tables/grdc_summary_values.csv')
mR=qdf['mean_runoff_mm_day'].to_numpy()
ssn_frac=qdf['seasonal_frac'].to_numpy()
anu_frac=qdf['annual_frac'].to_numpy()
evnt_frac=qdf['event_frac'].to_numpy()
da=qdf['DA_km2'].to_numpy()
mz=qdf['maxz'].to_numpy()/1000
snow=qdf['ssnstd'].to_numpy()
do=qdf['dist_from_sw_km'].to_numpy()
d=np.copy(do)
d[np.isnan(d)]=150
djf_run=qdf['DJF_mean_runoff_mm_day'].to_numpy()
mam_run=qdf['MAM_mean_runoff_mm_day'].to_numpy()
jja_run=qdf['JJA_mean_runoff_mm_day'].to_numpy()
son_run=qdf['SON_mean_runoff_mm_day'].to_numpy()
djf_rain=qdf['mnTRMM_djf_mm_day'].to_numpy()
mam_rain=qdf['mnTRMM_mam_mm_day'].to_numpy()
jja_rain=qdf['mnTRMM_jja_mm_day'].to_numpy()
son_rain=qdf['mnTRMM_son_mm_day'].to_numpy()
pdf=pd.read_csv('result_tables/GRDC_Distribution_Fits.csv')
c=pdf['c_best'].to_numpy()
s=pdf['s_best'].to_numpy()
df=pd.read_csv('result_tables/grdc_basin_clusters.csv')
cluster=df['cluster'].to_numpy().astype('int')
grdc_id=df['grdc_id'].to_numpy().astype('int')
# Colors for clusters
color_list=['maroon','dodgerblue','darkorange','darkolivegreen','crimson','blue']
# Difference in peak
diff_peak=np.zeros((len(grdc_id),3))
for i in range(len(grdc_id)):
fn='data_tables/grdc_daily_means/grdc_'+str(grdc_id[i])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
mnP=bdf['trmm_smoothed_mean_daily_rainfall_mm_day'].to_numpy()
rmax=np.argmax(mnR)
rdn=dn[rmax] # Day in year of max runoff
pmax=np.argmax(mnP)
pdn=dn[pmax] # Day in year of max rainfall
# Convert to radians
r_theta=(rdn/365)*2*np.pi
p_theta=(pdn/365)*2*np.pi
# Normalize to runoff angle
p_theta=p_theta-r_theta
r_theta=r_theta-r_theta
# Convert to cartesian
rx=np.cos(r_theta)
ry=np.sin(r_theta)
px=np.cos(p_theta)
py=np.sin(p_theta)
rv=np.array([rx,ry,0])
pv=np.array([px,py,0])
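    # arctan2(|rv x pv|, rv . pv) gives the unsigned angle between the two unit
    # vectors, so the peak separation wraps correctly across the year boundary
    # (e.g. day 360 vs day 5 comes out as ~10 days apart, not ~355)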
# Find angle between
a=np.arctan2(np.linalg.norm(np.cross(rv,pv)),np.dot(rv,pv))
diff_peak[i,0]=rdn
diff_peak[i,1]=pdn
diff_peak[i,2]=(a/(2*np.pi))*365
dp=diff_peak[:,2]
## Master Figure - Shape
f1=plt.figure(num=100,figsize=(15,20))
axl1=plt.subplot(4,2,1)
axl2=plt.subplot(4,2,3)
axl3=plt.subplot(4,2,5)
axl4=plt.subplot(4,2,7)
axr1=plt.subplot(3,2,2)
axr2=plt.subplot(3,2,4)
axr3=plt.subplot(3,2,6)
lcnum=np.arange(1,8,2)
for i in range(4):
idx=cluster==i
idOI=grdc_id[idx]
mzOI=mz[idx]
dOI=d[idx]
plt.subplot(4,2,lcnum[i])
for j in range(len(idOI)):
fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
mnP=bdf['trmm_smoothed_mean_daily_rainfall_mm_day'].to_numpy()
pks_max=np.argmax(mnP)
if mzOI[j]<2.7:
if dOI[j]<100:
plt.plot(dn,mnR,c=color_list[i],linewidth=1)
plt.scatter(dn[pks_max],mnP[pks_max],c=color_list[i],s=20)
else:
plt.plot(dn,mnR,c=color_list[i],linewidth=1,linestyle=':')
plt.scatter(dn[pks_max],mnP[pks_max],edgecolors=color_list[i],c='w',s=20)
else:
if dOI[j]<100:
plt.plot(dn,mnR,c=color_list[i],linewidth=2)
plt.scatter(dn[pks_max],mnP[pks_max],c=color_list[i],s=40,marker='s')
else:
plt.plot(dn,mnR,c=color_list[i],linewidth=2,linestyle=':')
plt.scatter(dn[pks_max],mnP[pks_max],edgecolors=color_list[i],c='w',s=40,marker='s')
plt.axvline(59,c='k',linewidth=0.5,linestyle='--')
plt.axvline(151,c='k',linewidth=0.5,linestyle='--')
plt.axvline(243,c='k',linewidth=0.5,linestyle='--')
plt.axvline(334,c='k',linewidth=0.5,linestyle='--')
plt.xlabel('Day in Year')
plt.ylabel('Smoothed Daily Mean Runoff [mm]')
plt.xlim((0,365))
plt.ylim((0,18))
for j in range(len(idOI)):
fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
# Find peak in runoff
rmax=np.argmax(mnR)
rdn=dn[rmax]
# Determine seasons
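        # Season cutoffs in day-of-year terms (non-leap year): 59 = end of Feb,
        # 151 = end of May, 243 = end of Aug, 334 = end of Nov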
if np.logical_or(rdn<=59,rdn>334):
# DJF
if dOI[j]<100:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
else:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
elif np.logical_and(rdn>59,rdn<=151):
# MAM '^'
if dOI[j]<100:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
else:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
elif np.logical_and(rdn>151,rdn<=243):
# JJA 's'
if dOI[j]<100:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
else:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
elif np.logical_and(rdn>243,rdn<=334):
# SON 'D'
if dOI[j]<100:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
else:
axr1.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
axr3.scatter(c[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
## Hand-built legend: marker size = max elevation, marker shape = peak runoff season, fill = position relative to GC
axr2.scatter(80,0.15,s=5*25,c='k')
axr2.scatter(80,0.12,s=4*25,c='k')
axr2.scatter(80,0.09,s=3*25,c='k')
axr2.scatter(80,0.06,s=2*25,c='k')
axr2.scatter(80,0.03,s=1*25,c='k')
axr2.text(90,0.15,'5 km')
axr2.text(90,0.12,'4 km')
axr2.text(90,0.09,'3 km')
axr2.text(90,0.06,'2 km')
axr2.text(90,0.03,'1 km')
axr2.text(80,0.18,'Max Elev.')
axr2.scatter(120,0.15,s=3*25,marker='o',c='k')
axr2.scatter(120,0.12,s=3*25,marker='^',c='k')
axr2.scatter(120,0.09,s=3*25,marker='s',c='k')
axr2.scatter(120,0.06,s=3*25,marker='D',c='k')
axr2.text(130,0.15,'DJF')
axr2.text(130,0.12,'MAM')
axr2.text(130,0.09,'JJA')
axr2.text(130,0.06,'SON')
axr2.text(110,0.18,'Peak Runoff Season')
axr2.scatter(35,0.06,s=3*25,c='k')
axr2.scatter(35,0.03,s=3*25,c='w',edgecolors='k')
axr2.text(45,0.06,'In GC')
axr2.text(45,0.03,'Outside GC')
axr2.text(35,0.09,'Position')
axr1.set_xlabel('Shape')
axr2.set_xlabel('Difference in Peaks [days]')
axr3.set_xlabel('Shape')
axr1.set_ylabel('Seasonal Fraction')
axr2.set_ylabel('Seasonal Fraction')
axr3.set_ylabel('Seasonal Snow STD')
## Master Figure - Scale
f2=plt.figure(num=200,figsize=(15,20))
axl1=plt.subplot(4,2,1)
axl2=plt.subplot(4,2,3)
axl3=plt.subplot(4,2,5)
axl4=plt.subplot(4,2,7)
axr1=plt.subplot(3,2,2)
axr2=plt.subplot(3,2,4)
axr3=plt.subplot(3,2,6)
lcnum=np.arange(1,8,2)
for i in range(4):
idx=cluster==i
idOI=grdc_id[idx]
mzOI=mz[idx]
dOI=d[idx]
plt.subplot(4,2,lcnum[i])
for j in range(len(idOI)):
fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
mnP=bdf['trmm_smoothed_mean_daily_rainfall_mm_day'].to_numpy()
pks_max=np.argmax(mnP)
if mzOI[j]<2.7:
if dOI[j]<100:
plt.plot(dn,mnR,c=color_list[i],linewidth=1)
plt.scatter(dn[pks_max],mnP[pks_max],c=color_list[i],s=20)
else:
plt.plot(dn,mnR,c=color_list[i],linewidth=1,linestyle=':')
plt.scatter(dn[pks_max],mnP[pks_max],edgecolors=color_list[i],c='w',s=20)
else:
if dOI[j]<100:
plt.plot(dn,mnR,c=color_list[i],linewidth=2)
plt.scatter(dn[pks_max],mnP[pks_max],c=color_list[i],s=40,marker='s')
else:
plt.plot(dn,mnR,c=color_list[i],linewidth=2,linestyle=':')
plt.scatter(dn[pks_max],mnP[pks_max],edgecolors=color_list[i],c='w',s=40,marker='s')
plt.axvline(59,c='k',linewidth=0.5,linestyle='--')
plt.axvline(151,c='k',linewidth=0.5,linestyle='--')
plt.axvline(243,c='k',linewidth=0.5,linestyle='--')
plt.axvline(334,c='k',linewidth=0.5,linestyle='--')
plt.xlabel('Day in Year')
plt.ylabel('Smoothed Daily Mean Runoff [mm]')
plt.xlim((0,365))
plt.ylim((0,18))
for j in range(len(idOI)):
fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
# Find peak in runoff
rmax=np.argmax(mnR)
rdn=dn[rmax]
# Determine seasons
if np.logical_or(rdn<=59,rdn>334):
# DJF
if dOI[j]<100:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
else:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='o')
elif np.logical_and(rdn>59,rdn<=151):
# MAM '^'
if dOI[j]<100:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
else:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='^')
elif np.logical_and(rdn>151,rdn<=243):
# JJA 's'
if dOI[j]<100:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
else:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='s')
elif np.logical_and(rdn>243,rdn<=334):
# SON 'D'
if dOI[j]<100:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
else:
axr1.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
axr2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
axr3.scatter(s[idx][j],snow[idx][j],s=mz[idx][j]*25,edgecolors=color_list[i],c='w',marker='D')
## Hand-built legend: marker size = max elevation, marker shape = peak runoff season, fill = position relative to GC
axr2.scatter(80,0.15,s=5*25,c='k')
axr2.scatter(80,0.12,s=4*25,c='k')
axr2.scatter(80,0.09,s=3*25,c='k')
axr2.scatter(80,0.06,s=2*25,c='k')
axr2.scatter(80,0.03,s=1*25,c='k')
axr2.text(90,0.15,'5 km')
axr2.text(90,0.12,'4 km')
axr2.text(90,0.09,'3 km')
axr2.text(90,0.06,'2 km')
axr2.text(90,0.03,'1 km')
axr2.text(80,0.18,'Max Elev.')
axr2.scatter(120,0.15,s=3*25,marker='o',c='k')
axr2.scatter(120,0.12,s=3*25,marker='^',c='k')
axr2.scatter(120,0.09,s=3*25,marker='s',c='k')
axr2.scatter(120,0.06,s=3*25,marker='D',c='k')
axr2.text(130,0.15,'DJF')
axr2.text(130,0.12,'MAM')
axr2.text(130,0.09,'JJA')
axr2.text(130,0.06,'SON')
axr2.text(110,0.18,'Peak Runoff Season')
axr2.scatter(35,0.06,s=3*25,c='k')
axr2.scatter(35,0.03,s=3*25,c='w',edgecolors='k')
axr2.text(45,0.06,'In GC')
axr2.text(45,0.03,'Outside GC')
axr2.text(35,0.09,'Position')
axr1.set_xlabel('Scale')
axr2.set_xlabel('Difference in Peaks [days]')
axr3.set_xlabel('Scale')
axr1.set_ylabel('Seasonal Fraction')
axr2.set_ylabel('Seasonal Fraction')
axr3.set_ylabel('Seasonal Snow STD')
# f1.savefig('seasonal_shape.pdf')
# f2.savefig('seasonal_scale.pdf')
# ## Figure 3
# f3=plt.figure(num=3,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# ax1.scatter(c[idx],anu_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax2.scatter(c[idx],ssn_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax3.scatter(c[idx],evnt_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax4.scatter(mR[idx],ssn_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax1.set_ylabel('Annual Fraction')
# ax2.set_ylabel('Seasonal Fraction')
# ax3.set_ylabel('Event Fraction')
# ax4.set_ylabel('Seasonal Fraction')
# ax1.set_xlabel('Shape')
# ax2.set_xlabel('Shape')
# ax3.set_xlabel('Shape')
# ax4.set_xlabel('Mean Runoff [mm/day]')
# ## Figure 4
# f4=plt.figure(num=4,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# ax1.scatter(s[idx],anu_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax2.scatter(s[idx],ssn_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax3.scatter(s[idx],evnt_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax4.scatter(mR[idx],ssn_frac[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax1.set_ylabel('Annual Fraction')
# ax2.set_ylabel('Seasonal Fraction')
# ax3.set_ylabel('Event Fraction')
# ax4.set_ylabel('Seasonal Fraction')
# ax1.set_xlabel('Scale')
# ax2.set_xlabel('Scale')
# ax3.set_xlabel('Scale')
# ax4.set_xlabel('Mean Runoff [mm/day]')
# ## Figure 5
# f5=plt.figure(num=5,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# ax1.scatter(c[idx],djf_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax2.scatter(c[idx],mam_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax3.scatter(c[idx],jja_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax4.scatter(c[idx],son_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax1.set_ylabel('Winter Mean Runoff [mm/day]')
# ax2.set_ylabel('Spring Mean Runoff [mm/day]')
# ax3.set_ylabel('Summer Mean Runoff [mm/day]')
# ax4.set_ylabel('Fall Mean Runoff [mm/day]')
# ax1.set_xlabel('Shape')
# ax2.set_xlabel('Shape')
# ax3.set_xlabel('Shape')
# ax4.set_xlabel('Shape')
# ## Figure 6
# f6=plt.figure(num=6,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# ax1.scatter(s[idx],djf_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax2.scatter(s[idx],mam_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax3.scatter(s[idx],jja_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax4.scatter(s[idx],son_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax1.set_ylabel('Winter Mean Runoff [mm/day]')
# ax2.set_ylabel('Spring Mean Runoff [mm/day]')
# ax3.set_ylabel('Summer Mean Runoff [mm/day]')
# ax4.set_ylabel('Fall Mean Runoff [mm/day]')
# ax1.set_xlabel('Scale')
# ax2.set_xlabel('Scale')
# ax3.set_xlabel('Scale')
# ax4.set_xlabel('Scale')
# ## Figure 7
# f7=plt.figure(num=7,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# ax1.plot(np.array([0,12]),np.array([0,12]),c='k',linestyle=':')
# ax2.plot(np.array([0,12]),np.array([0,12]),c='k',linestyle=':')
# ax3.plot(np.array([0,12]),np.array([0,12]),c='k',linestyle=':')
# ax4.plot(np.array([0,12]),np.array([0,12]),c='k',linestyle=':')
# for i in range(4):
# idx=cluster==i
# ax1.scatter(djf_rain[idx],djf_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax2.scatter(mam_rain[idx],mam_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax3.scatter(jja_rain[idx],jja_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax4.scatter(son_rain[idx],son_run[idx],s=da[idx]/10,c=color_list[i],edgecolors='k')
# ax1.set_ylabel('Winter Mean Runoff [mm/day]')
# ax2.set_ylabel('Spring Mean Runoff [mm/day]')
# ax3.set_ylabel('Summer Mean Runoff [mm/day]')
# ax4.set_ylabel('Fall Mean Runoff [mm/day]')
# ax1.set_xlabel('Winter Mean Rainfall [mm/day]')
# ax2.set_xlabel('Spring Mean Rainfall [mm/day]')
# ax3.set_xlabel('Summer Mean Rainfall [mm/day]')
# ax4.set_xlabel('Fall Mean Rainfall [mm/day]')
# ## Figure 8
# f8=plt.figure(num=8,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# idOI=grdc_id[idx]
# for j in range(len(idOI)):
# fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
# bdf=pd.read_csv(fn)
# dn=bdf['day_number'].to_numpy()
# mnP=bdf['trmm_smoothed_mean_daily_rainfall_mm_day'].to_numpy()
# # Find peak in rainfall
# pmax=np.argmax(mnP)
# pdn=dn[pmax]
# # Determine seasons
# if np.logical_or(pdn<=59,pdn>334):
# # DJF
# ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax2.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax3.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax4.scatter(mR[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# elif np.logical_and(pdn>59,pdn<=151):
# # MAM
# ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax2.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax3.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax4.scatter(mR[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# elif np.logical_and(pdn>151,pdn<=243):
# # JJA
# ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax2.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax3.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax4.scatter(mR[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# elif np.logical_and(pdn>243,pdn<=334):
# # SON
# ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax2.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax3.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax4.scatter(mR[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax1.set_ylabel('Annual Fraction')
# ax2.set_ylabel('Seasonal Fraction')
# ax3.set_ylabel('Event Fraction')
# ax4.set_ylabel('Seasonal Fraction')
# ax1.set_xlabel('Shape')
# ax2.set_xlabel('Shape')
# ax3.set_xlabel('Shape')
# ax4.set_xlabel('Mean Runoff [mm/day]')
## Figure 9
f9=plt.figure(num=9,figsize=(15,20))
ax1=plt.subplot(3,2,1)
ax2=plt.subplot(3,2,2)
ax3=plt.subplot(3,2,3)
ax4=plt.subplot(3,2,4)
ax5=plt.subplot(3,2,5)
ax6=plt.subplot(3,2,6)
for i in range(4):
idx=cluster==i
idOI=grdc_id[idx]
for j in range(len(idOI)):
fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
bdf=pd.read_csv(fn)
dn=bdf['day_number'].to_numpy()
mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
# Find peak in runoff
rmax=np.argmax(mnR)
rdn=dn[rmax]
# Determine seasons
if np.logical_or(rdn<=59,rdn>334):
# DJF
ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
ax3.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
ax5.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
ax2.scatter(s[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
ax4.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
ax6.scatter(s[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
elif np.logical_and(rdn>59,rdn<=151):
# MAM
ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
ax3.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
ax5.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
ax2.scatter(s[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
ax4.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
ax6.scatter(s[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
elif np.logical_and(rdn>151,rdn<=243):
# JJA
ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
ax3.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
ax5.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
ax2.scatter(s[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
ax4.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
ax6.scatter(s[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
elif np.logical_and(rdn>243,rdn<=334):
# SON
ax1.scatter(c[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax3.scatter(c[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax5.scatter(c[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax2.scatter(s[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax4.scatter(s[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax6.scatter(s[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
ax1.set_ylabel('Annual Fraction')
ax2.set_ylabel('Annual Fraction')
ax3.set_ylabel('Seasonal Fraction')
ax4.set_ylabel('Seasonal Fraction')
ax5.set_ylabel('Event Fraction')
ax6.set_ylabel('Event Fraction')
ax1.set_xlabel('Shape')
ax3.set_xlabel('Shape')
ax5.set_xlabel('Shape')
ax2.set_xlabel('Scale')
ax4.set_xlabel('Scale')
ax6.set_xlabel('Scale')
ax4.scatter(0.8,0.15,s=5*25,c='k')
ax4.scatter(0.8,0.12,s=4*25,c='k')
ax4.scatter(0.8,0.09,s=3*25,c='k')
ax4.scatter(0.8,0.06,s=2*25,c='k')
ax4.scatter(0.8,0.03,s=1*25,c='k')
ax4.text(0.9,0.15,'5 km')
ax4.text(0.9,0.12,'4 km')
ax4.text(0.9,0.09,'3 km')
ax4.text(0.9,0.06,'2 km')
ax4.text(0.9,0.03,'1 km')
ax4.text(0.9,0.18,'Max Elev.')
ax4.scatter(1.1,0.15,s=3*25,marker='o',c='k')
ax4.scatter(1.1,0.12,s=3*25,marker='^',c='k')
ax4.scatter(1.1,0.09,s=3*25,marker='s',c='k')
ax4.scatter(1.1,0.06,s=3*25,marker='D',c='k')
ax4.text(1.2,0.15,'DJF')
ax4.text(1.2,0.12,'MAM')
ax4.text(1.2,0.09,'JJA')
ax4.text(1.2,0.06,'SON')
ax4.text(1.2,0.18,'Peak Runoff Season')
ax4.scatter(0.4,0.06,s=3*25,c='k')
ax4.scatter(0.4,0.03,s=3*25,c='w',edgecolors='k')
ax4.text(0.45,0.06,'In GC')
ax4.text(0.45,0.03,'Outside GC')
ax4.text(0.35,0.09,'Position')
f9.savefig('fractions.pdf')
# ## Figure 10
# f10=plt.figure(num=10,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# idOI=grdc_id[idx]
# for j in range(len(idOI)):
# fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
# bdf=pd.read_csv(fn)
# dn=bdf['day_number'].to_numpy()
# mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
# # Find peak in runoff
# rmax=np.argmax(mnR)
# rdn=dn[rmax]
# # Determine seasons
# if np.logical_or(rdn<=59,rdn>334):
# # DJF
# ax1.scatter(dp[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax3.scatter(dp[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax4.scatter(mR[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# elif np.logical_and(rdn>59,rdn<=151):
# # MAM
# ax1.scatter(dp[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax3.scatter(dp[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax4.scatter(mR[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# elif np.logical_and(rdn>151,rdn<=243):
# # JJA
# ax1.scatter(dp[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax3.scatter(dp[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax4.scatter(mR[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# elif np.logical_and(rdn>243,rdn<=334):
# # SON
# ax1.scatter(dp[idx][j],anu_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax2.scatter(dp[idx][j],ssn_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax3.scatter(dp[idx][j],evnt_frac[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax4.scatter(mR[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax1.set_ylabel('Annual Fraction')
# ax2.set_ylabel('Seasonal Fraction')
# ax3.set_ylabel('Event Fraction')
# ax4.set_ylabel('Difference in Peaks [days]')
# ax1.set_xlabel('Difference in Peaks [days]')
# ax2.set_xlabel('Difference in Peaks [days]')
# ax3.set_xlabel('Difference in Peaks [days]')
# ax4.set_xlabel('Mean Runoff [mm/day]')
# ## Figure 11
# f11=plt.figure(num=11,figsize=(15,15))
# ax1=plt.subplot(2,2,1)
# ax2=plt.subplot(2,2,2)
# ax3=plt.subplot(2,2,3)
# ax4=plt.subplot(2,2,4)
# for i in range(4):
# idx=cluster==i
# idOI=grdc_id[idx]
# for j in range(len(idOI)):
# fn='data_tables/grdc_daily_means/grdc_'+str(idOI[j])+'_mean_daily.csv'
# bdf=pd.read_csv(fn)
# dn=bdf['day_number'].to_numpy()
# mnR=bdf['grdc_smoothed_mean_daily_runoff_mm_day'].to_numpy()
# # Find peak in runoff
# rmax=np.argmax(mnR)
# rdn=dn[rmax]
# # Determine seasons
# if np.logical_or(rdn<=59,rdn>334):
# # DJF
# ax1.scatter(c[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax2.scatter(s[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax3.scatter(snow[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# ax4.scatter(snow[idx][j],c[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='o')
# elif np.logical_and(rdn>59,rdn<=151):
# # MAM
# ax1.scatter(c[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax2.scatter(s[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax3.scatter(snow[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# ax4.scatter(snow[idx][j],c[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='^')
# elif np.logical_and(rdn>151,rdn<=243):
# # JJA
# ax1.scatter(c[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax2.scatter(s[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax3.scatter(snow[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# ax4.scatter(snow[idx][j],c[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='s')
# elif np.logical_and(rdn>243,rdn<=334):
# # SON
# ax1.scatter(c[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax2.scatter(s[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax3.scatter(snow[idx][j],dp[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax4.scatter(snow[idx][j],c[idx][j],s=mz[idx][j]*25,c=color_list[i],edgecolors='k',marker='D')
# ax1.set_ylabel('Difference in Peaks [days]')
# ax2.set_ylabel('Difference in Peaks [days]')
# ax3.set_ylabel('Difference in Peaks [days]')
# ax4.set_ylabel('Shape')
# ax1.set_xlabel('Shape')
# ax2.set_xlabel('Scale')
# ax3.set_xlabel('Seasonal Snow STD')
# ax4.set_xlabel('Seasonal Snow STD')
| 45.918809 | 135 | 0.605676 | 6,238 | 33,934 | 3.186919 | 0.044726 | 0.072435 | 0.078471 | 0.070825 | 0.898793 | 0.88164 | 0.85674 | 0.841499 | 0.831992 | 0.828571 | 0 | 0.060085 | 0.158867 | 33,934 | 738 | 136 | 45.98103 | 0.636408 | 0.404373 | 0 | 0.582888 | 0 | 0 | 0.10077 | 0.037833 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008021 | 0 | 0.008021 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d16d9c9b0f5a9573e11d561f1786fe8e61fddab5 | 141 | py | Python | Ad-Hoc/2455.py | LorranSutter/URI-Online-Judge | aef885b9a7caa83484cf172e29eea8ec92fc3627 | [
"MIT"
] | null | null | null | Ad-Hoc/2455.py | LorranSutter/URI-Online-Judge | aef885b9a7caa83484cf172e29eea8ec92fc3627 | [
"MIT"
] | null | null | null | Ad-Hoc/2455.py | LorranSutter/URI-Online-Judge | aef885b9a7caa83484cf172e29eea8ec92fc3627 | [
"MIT"
] | null | null | null | P1, C1, P2, C2 = map(int,input().split())
left, right = P1*C1, P2*C2
if left == right: print(0)
elif left > right: print(-1)
else: print(1) | 20.142857 | 41 | 0.617021 | 27 | 141 | 3.222222 | 0.592593 | 0.310345 | 0.137931 | 0.183908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094017 | 0.170213 | 141 | 7 | 42 | 20.142857 | 0.649573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
0f1263410311e27e06c6b1e81b7ea8e8ba95307e | 4,040 | py | Python | unnoise/moyenneurV2.py | Krown0s/TraitementsImages | 6d0a101c80a50abc42b3208504e8217042440b43 | [
"MIT"
] | null | null | null | unnoise/moyenneurV2.py | Krown0s/TraitementsImages | 6d0a101c80a50abc42b3208504e8217042440b43 | [
"MIT"
] | null | null | null | unnoise/moyenneurV2.py | Krown0s/TraitementsImages | 6d0a101c80a50abc42b3208504e8217042440b43 | [
"MIT"
] | null | null | null |
# -*- encoding: utf-8 -*-
from copy import deepcopy
from numpy import *
"""
Denoising by mean filtering of the black and white pixels over a 5×5 window
<image> the image to denoise
returns the denoised image
"""
def moyenneur(image):
newimg = deepcopy(image)
for x in range(newimg.shape[0]):
for y in range(newimg.shape[1]):
if newimg[x][y] == 0 or newimg[x][y] == 1:
newimg[x][y] = moyennePixel(newimg, x, y)
return newimg
# Compute the mean of the pixels surrounding (x, y) in the image (the centre pixel is excluded)
def moyennePixel(image, x, y):
values = zeros(26, float)
    if image.shape[0] - 1 >= x - 1 >= 0 and image.shape[1] - 1 >= y - 1 >= 0:
values[1] = image[x - 1][y - 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 1 >= 0:
values[2] = image[x - 1][y]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 1 >= 0 and image.shape[1] - 1 >= y + 1 >= 0:
values[3] = image[x - 1][y + 1]
values[0] = values[0] + 1
if image.shape[1] - 1 >= y - 1 >= 0:
values[4] = image[x][y - 1]
values[0] = values[0] + 1
if image.shape[1] - 1 >= y + 1 >= 0:
values[5] = image[x][y + 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 1 >= 0 and image.shape[1] - 1 >= y - 1 >= 0:
values[6] = image[x + 1][y - 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 1 >= 0:
values[7] = image[x + 1][y]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 1 >= 0 and image.shape[1] - 1 >= y + 1 >= 0:
values[8] = image[x + 1][y + 1]
values[0] = values[0] + 1
# Version 2
if image.shape[0] - 1 >= x - 2 >= 0 and image.shape[1] - 1 >= y - 2 >= 0:
values[10] = image[x - 2][y - 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 2 >= 0 and image.shape[1] - 1 >= y - 1 >= 0:
values[11] = image[x - 2][y - 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 2 >= 0:
values[12] = image[x - 2][y]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 2 >= 0 and image.shape[1] - 1 >= y + 1 >= 0:
values[13] = image[x - 2][y + 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 2 >= 0 and image.shape[1] - 1 >= y + 2 >= 0:
values[14] = image[x - 2][y + 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 1 >= 0 and image.shape[1] - 1 >= y - 2 >= 0:
values[15] = image[x - 1][y - 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x - 1 >= 0 and image.shape[1] - 1 >= y + 2 >= 0:
        values[16] = image[x - 1][y + 2]
        values[0] = values[0] + 1
if image.shape[1] - 1 >= y - 2 >= 0:
values[17] = image[x][y - 2]
values[0] = values[0] + 1
if image.shape[1] - 1 >= y + 2 >= 0:
values[18] = image[x][y + 2]
values[0] = values[0] + 1
    if image.shape[0] - 1 >= x + 1 >= 0 and image.shape[1] - 1 >= y - 2 >= 0:
        values[19] = image[x + 1][y - 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 1 >= 0 and image.shape[1] - 1 >= y + 2 >= 0:
values[20] = image[x + 1][y + 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 2 >= 0 and image.shape[1] - 1 >= y - 2 >= 0:
values[21] = image[x + 2][y - 2]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 2 >= 0 and image.shape[1] - 1 >= y - 1 >= 0:
values[22] = image[x + 2][y - 1]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 2 >= 0:
values[23] = image[x + 2][y]
values[0] = values[0] + 1
if image.shape[0] - 1 >= x + 2 >= 0 and image.shape[1] - 1 >= y + 1 >= 0:
values[24] = image[x + 2][y + 1]
values[0] = values[0] + 1
    if image.shape[0] - 1 >= x + 2 >= 0 and image.shape[1] - 1 >= y + 2 >= 0:
        values[25] = image[x + 2][y + 2]
values[0] = values[0] + 1
moyenne = (sum(values) - values[0]) / values[0]
return moyenne | 39.607843 | 99 | 0.468317 | 705 | 4,040 | 2.685106 | 0.107801 | 0.184892 | 0.171685 | 0.184892 | 0.743265 | 0.73851 | 0.73851 | 0.730058 | 0.730058 | 0.730058 | 0 | 0.119293 | 0.327723 | 4,040 | 102 | 100 | 39.607843 | 0.57732 | 0.018564 | 0 | 0.305882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023529 | false | 0 | 0.023529 | 0 | 0.070588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
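# --- usage sketch ---------------------------------------------------------------
# A minimal run of the filter defined above, assuming the 0/1 noise encoding it
# tests for (pixels equal to exactly 0 or 1 are treated as noise); the toy image
# and the injected noise positions are made up for illustration.
import numpy as np

img = np.full((10, 10), 0.5)   # flat grey test image
img[3, 4] = 0.0                # injected "pepper" pixel
img[6, 7] = 1.0                # injected "salt" pixel
cleaned = moyenneur(img)       # noisy pixels replaced by their 5x5 neighbourhood mean
print(cleaned[3, 4], cleaned[6, 7])   # both come back to 0.5
# ----------------------------------------------------------------------------------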
7e2ecac7c74a5f5d724116b08093afd238a4077d | 80 | py | Python | sampleSystems/__init__.py | udanzo-p/quantiacs-python | cf698968a572a35bd884b12fef3cb407e4cfda8f | [
"MIT"
] | 246 | 2016-09-04T14:29:16.000Z | 2021-02-24T13:54:07.000Z | sampleSystems/__init__.py | udanzo-p/quantiacs-python | cf698968a572a35bd884b12fef3cb407e4cfda8f | [
"MIT"
] | 7 | 2017-03-22T14:18:44.000Z | 2020-10-20T20:04:51.000Z | sampleSystems/__init__.py | udanzo-p/quantiacs-python | cf698968a572a35bd884b12fef3cb407e4cfda8f | [
"MIT"
] | 122 | 2016-12-01T11:39:34.000Z | 2021-02-21T11:12:19.000Z | from . import meanReversion
from . import trendFollowing
from . import simpleTS
| 20 | 28 | 0.8125 | 9 | 80 | 7.222222 | 0.555556 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 80 | 3 | 29 | 26.666667 | 0.955882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e5a2f7ddf3890a6a9d2a2f7af81e7baf5699479 | 656 | py | Python | aoc_cqkh42/year_2017/day_05.py | cqkh42/advent-of-code | bcf31cf8973a5b6d67492c412dce10df742e04d1 | [
"MIT"
] | null | null | null | aoc_cqkh42/year_2017/day_05.py | cqkh42/advent-of-code | bcf31cf8973a5b6d67492c412dce10df742e04d1 | [
"MIT"
] | null | null | null | aoc_cqkh42/year_2017/day_05.py | cqkh42/advent-of-code | bcf31cf8973a5b6d67492c412dce10df742e04d1 | [
"MIT"
] | null | null | null | import itertools
def part_a(data):
jumps = [int(num) for num in data.split('\n')]
index = 0
for step in itertools.count(0):
try:
new_index = index + jumps[index]
except IndexError:
return step
else:
jumps[index] += 1
index = new_index
def part_b(data, **_):
jumps = [int(num) for num in data.split('\n')]
index = 0
for step in itertools.count(0):
try:
new_index = index + jumps[index]
except IndexError:
return step
else:
jumps[index] += (-1) ** (jumps[index] >= 3)
index = new_index
| 23.428571 | 55 | 0.510671 | 81 | 656 | 4.049383 | 0.320988 | 0.152439 | 0.073171 | 0.091463 | 0.792683 | 0.792683 | 0.792683 | 0.792683 | 0.792683 | 0.792683 | 0 | 0.017032 | 0.373476 | 656 | 27 | 56 | 24.296296 | 0.781022 | 0 | 0 | 0.782609 | 0 | 0 | 0.006098 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.043478 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
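# --- usage sketch ---------------------------------------------------------------
# The worked example from the Advent of Code 2017 day 5 statement: the jump list
# (0) 3 0 1 -3 escapes in 5 steps under the part-one rule, and in 10 steps once
# offsets of three or more are decremented instead of incremented.
sample = '0\n3\n0\n1\n-3'
print(part_a(sample))   # 5
print(part_b(sample))   # 10
# ----------------------------------------------------------------------------------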
7e69c67bae6259e498005bd80b46d81d7683889f | 28 | py | Python | action_cwt/__init__.py | Fumipo-Theta/action_cwt | e0d747138e0201bf69716f6ab068d2f62f97d846 | [
"BSD-2-Clause"
] | null | null | null | action_cwt/__init__.py | Fumipo-Theta/action_cwt | e0d747138e0201bf69716f6ab068d2f62f97d846 | [
"BSD-2-Clause"
] | null | null | null | action_cwt/__init__.py | Fumipo-Theta/action_cwt | e0d747138e0201bf69716f6ab068d2f62f97d846 | [
"BSD-2-Clause"
] | null | null | null | from .action_cwt import CWT
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e8d51a75f21913d6f12876391877fd32a019fa5 | 4,052 | py | Python | tests/test_loader_initialization.py | peonone/itemloaders | 228dd499d3bace1c604d5eee195c7a29b595f5b5 | [
"BSD-3-Clause"
] | 31 | 2020-05-05T14:19:36.000Z | 2021-12-18T01:54:39.000Z | tests/test_loader_initialization.py | peonone/itemloaders | 228dd499d3bace1c604d5eee195c7a29b595f5b5 | [
"BSD-3-Clause"
] | 46 | 2020-05-08T11:38:39.000Z | 2022-03-18T16:26:08.000Z | tests/test_loader_initialization.py | peonone/itemloaders | 228dd499d3bace1c604d5eee195c7a29b595f5b5 | [
"BSD-3-Clause"
] | 10 | 2020-07-12T12:41:35.000Z | 2021-06-14T08:10:38.000Z | import unittest
from itemloaders import ItemLoader
class InitializationTestMixin:
item_class = None
def test_keep_single_value(self):
"""Loaded item should contain values from the initial item"""
input_item = self.item_class(name='foo')
il = ItemLoader(item=input_item)
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo']})
def test_keep_list(self):
"""Loaded item should contain values from the initial item"""
input_item = self.item_class(name=['foo', 'bar'])
il = ItemLoader(item=input_item)
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo', 'bar']})
def test_add_value_singlevalue_singlevalue(self):
"""Values added after initialization should be appended"""
input_item = self.item_class(name='foo')
il = ItemLoader(item=input_item)
il.add_value('name', 'bar')
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo', 'bar']})
def test_add_value_singlevalue_list(self):
"""Values added after initialization should be appended"""
input_item = self.item_class(name='foo')
il = ItemLoader(item=input_item)
il.add_value('name', ['item', 'loader'])
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo', 'item', 'loader']})
def test_add_value_list_singlevalue(self):
"""Values added after initialization should be appended"""
input_item = self.item_class(name=['foo', 'bar'])
il = ItemLoader(item=input_item)
il.add_value('name', 'qwerty')
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo', 'bar', 'qwerty']})
def test_add_value_list_list(self):
"""Values added after initialization should be appended"""
input_item = self.item_class(name=['foo', 'bar'])
il = ItemLoader(item=input_item)
il.add_value('name', ['item', 'loader'])
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(dict(loaded_item), {'name': ['foo', 'bar', 'item', 'loader']})
def test_get_output_value_singlevalue(self):
"""Getting output value must not remove value from item"""
input_item = self.item_class(name='foo')
il = ItemLoader(item=input_item)
self.assertEqual(il.get_output_value('name'), ['foo'])
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(loaded_item, dict({'name': ['foo']}))
def test_get_output_value_list(self):
"""Getting output value must not remove value from item"""
input_item = self.item_class(name=['foo', 'bar'])
il = ItemLoader(item=input_item)
self.assertEqual(il.get_output_value('name'), ['foo', 'bar'])
loaded_item = il.load_item()
self.assertIsInstance(loaded_item, self.item_class)
self.assertEqual(loaded_item, dict({'name': ['foo', 'bar']}))
def test_values_single(self):
"""Values from initial item must be added to loader._values"""
input_item = self.item_class(name='foo')
il = ItemLoader(item=input_item)
self.assertEqual(il._values.get('name'), ['foo'])
def test_values_list(self):
"""Values from initial item must be added to loader._values"""
input_item = self.item_class(name=['foo', 'bar'])
il = ItemLoader(item=input_item)
self.assertEqual(il._values.get('name'), ['foo', 'bar'])
class InitializationFromDictTest(InitializationTestMixin, unittest.TestCase):
item_class = dict
| 42.652632 | 87 | 0.655232 | 507 | 4,052 | 5.005917 | 0.096647 | 0.094563 | 0.085106 | 0.120567 | 0.887707 | 0.852246 | 0.852246 | 0.852246 | 0.852246 | 0.852246 | 0 | 0 | 0.206811 | 4,052 | 94 | 88 | 43.106383 | 0.78967 | 0.134008 | 0 | 0.588235 | 0 | 0 | 0.06391 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 1 | 0.147059 | false | 0 | 0.029412 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
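# --- usage sketch ---------------------------------------------------------------
# The behaviour the mixin above pins down, in one runnable snippet with a plain
# dict item: values from the initial item are kept, and later add_value() calls
# append to them (mirrors test_add_value_singlevalue_list).
from itemloaders import ItemLoader

loader = ItemLoader(item={'name': 'foo'})
loader.add_value('name', ['item', 'loader'])
print(loader.load_item())   # {'name': ['foo', 'item', 'loader']}
# ----------------------------------------------------------------------------------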
0e234bda73def0f6db57c75938e52b4c1fb145ad | 48 | py | Python | microblog.py | hao-beixi/microblog | 31aebf9a5eeb311113721553c26e1105bcf267e8 | [
"MIT"
] | null | null | null | microblog.py | hao-beixi/microblog | 31aebf9a5eeb311113721553c26e1105bcf267e8 | [
"MIT"
] | null | null | null | microblog.py | hao-beixi/microblog | 31aebf9a5eeb311113721553c26e1105bcf267e8 | [
"MIT"
] | null | null | null | # Main application module
from app import app | 16 | 26 | 0.770833 | 7 | 48 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 48 | 3 | 27 | 16 | 0.973684 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e65c30be74f364e434cc06fed9b61ba3fa99b69 | 3,809 | py | Python | funcFont.py | Lyle-zhang/kinetic_schemes | dc572bd1eedfddb871767573724cadddc57db76d | [
"MIT"
] | 1 | 2021-12-27T11:14:58.000Z | 2021-12-27T11:14:58.000Z | funcFont.py | Lyle-zhang/kinetic_schemes | dc572bd1eedfddb871767573724cadddc57db76d | [
"MIT"
] | null | null | null | funcFont.py | Lyle-zhang/kinetic_schemes | dc572bd1eedfddb871767573724cadddc57db76d | [
"MIT"
] | 1 | 2021-08-14T13:40:24.000Z | 2021-08-14T13:40:24.000Z | """
Function based on Font 1990 kinetic reaction scheme for biomass pyrolysis.
Reactions evaluated at some temperature.
Functions:
font1 - fluidized bed kinetics
font2 - pyroprobe kinetics
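Scheme (as implemented below): three parallel first-order reactions,
wood --K1--> gas, wood --K2--> tar, wood --K3--> char, with rate constants
Ki = Ai * exp(-Ei / (R*T)) so that d(pw)/dt = -(K1 + K2 + K3) * pw.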
Reference:
Font, Marcilla, Verdu, Devesa, 1990. Ind. Eng. Chem. Res., 29, pp.1846-1855.
"""
# Modules
# -----------------------------------------------------------------------------
import numpy as np
# Function - primary reactions from fluidized bed
# -----------------------------------------------------------------------------
def font1(rhow, T, dt, nt):
"""
rhow = wood density, kg/m^3
T = temperature, K
dt = time step, s
nt = total number of time steps
"""
# vector for initial wood concentration, kg/m^3
pw = np.ones(nt)*rhow
# vectors to store product concentrations, kg/m^3
pg = np.zeros(nt) # gas
pt = np.zeros(nt) # tar
pc = np.zeros(nt) # char
R = 0.008314 # universal gas constant, kJ/mol*K
# A = pre-factor (1/s) and E = activation energy (kJ/mol)
A1 = 6.80e8; E1 = 156 # wood -> gas
A2 = 8.23e8; E2 = 148 # wood -> tar
A3 = 2.91e2; E3 = 61 # wood -> char
# reaction rate constant for each reaction, 1/s
K1 = A1 * np.exp(-E1 / (R * T)) # wood -> gas
K2 = A2 * np.exp(-E2 / (R * T)) # wood -> tar
K3 = A3 * np.exp(-E3 / (R * T)) # wood -> char
# concentrations at each time step for each product, kg/m^3
# reaction rate as r, rho/s
# concentration as density p, kg/m^3
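    # explicit (forward) Euler integration below; dt should stay small relative
    # to 1/(K1+K2+K3) or the update loses accuracy and can go unstable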
for i in range(1, nt):
rww = -(K1+K2+K3) * pw[i-1] # wood rate
rwg = K1 * pw[i-1] # wood -> gas rate
rwt = K2 * pw[i-1] # wood -> tar rate
rwc = K3 * pw[i-1] # wood -> char rate
pw[i] = pw[i-1] + rww*dt # wood
pg[i] = pg[i-1] + rwg*dt # gas
pt[i] = pt[i-1] + rwt*dt # tar
pc[i] = pc[i-1] + rwc*dt # char
# return the wood, char, gas, tar concentrations as a density, kg/m^3
return pw, pg, pt, pc
# Function - primary reactions from pyroprobe 100
# -----------------------------------------------------------------------------
def font2(rhow, T, dt, nt):
"""
rhow = wood density, kg/m^3
T = temperature, K
dt = time step, s
nt = total number of time steps
"""
# vector for initial wood concentration, kg/m^3
pw = np.ones(nt)*rhow
# vectors to store product concentrations, kg/m^3
pg = np.zeros(nt) # gas
pt = np.zeros(nt) # tar
pc = np.zeros(nt) # char
R = 0.008314 # universal gas constant, kJ/mol*K
# A = pre-factor (1/s) and E = activation energy (kJ/mol)
A1 = 1.52e7; E1 = 139 # wood -> gas
A2 = 5.85e6; E2 = 119 # wood -> tar
A3 = 2.98e3; E3 = 73 # wood -> char
# reaction rate constant for each reaction, 1/s
K1 = A1 * np.exp(-E1 / (R * T)) # wood -> gas
K2 = A2 * np.exp(-E2 / (R * T)) # wood -> tar
K3 = A3 * np.exp(-E3 / (R * T)) # wood -> char
# concentrations at each time step for each product, kg/m^3
# reaction rate as r, rho/s
# concentration as density p, kg/m^3
for i in range(1, nt):
rww = -(K1+K2+K3) * pw[i-1] # wood rate
rwg = K1 * pw[i-1] # wood -> gas rate
rwt = K2 * pw[i-1] # wood -> tar rate
rwc = K3 * pw[i-1] # wood -> char rate
pw[i] = pw[i-1] + rww*dt # wood
pg[i] = pg[i-1] + rwg*dt # gas
pt[i] = pt[i-1] + rwt*dt # tar
pc[i] = pc[i-1] + rwc*dt # char
    # return the wood, gas, tar, char concentrations as a density, kg/m^3
return pw, pg, pt, pc
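# Minimal usage sketch (hypothetical inputs, not values from the reference):
# integrate the fluidized-bed scheme at 773 K with a 1 ms step for 2 s, e.g.
#   pw, pg, pt, pc = font1(rhow=540, T=773, dt=0.001, nt=2000)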
| 34.008929 | 79 | 0.481229 | 555 | 3,809 | 3.302703 | 0.237838 | 0.017458 | 0.026187 | 0.034915 | 0.767049 | 0.767049 | 0.767049 | 0.767049 | 0.767049 | 0.767049 | 0 | 0.06088 | 0.331583 | 3,809 | 112 | 80 | 34.008929 | 0.659073 | 0.518246 | 0 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.022222 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0e8374f70d1ac88d7c16515d3f9986499d4308f6 | 20,767 | py | Python | sdk/search/azure-search-documents/tests/async_tests/test_service_live_async.py | arrownj/azure-sdk-for-python | b27939483a91d5171e08b2998ed779b1f4f7dcb0 | [
"MIT"
] | null | null | null | sdk/search/azure-search-documents/tests/async_tests/test_service_live_async.py | arrownj/azure-sdk-for-python | b27939483a91d5171e08b2998ed779b1f4f7dcb0 | [
"MIT"
] | null | null | null | sdk/search/azure-search-documents/tests/async_tests/test_service_live_async.py | arrownj/azure-sdk-for-python | b27939483a91d5171e08b2998ed779b1f4f7dcb0 | [
"MIT"
] | null | null | null | # -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import asyncio
import functools
import json
from os.path import dirname, join, realpath
import time
import pytest
from azure.core.credentials import AzureKeyCredential
from devtools_testutils import AzureMgmtTestCase, ResourceGroupPreparer
from search_service_preparer import SearchServicePreparer
from azure_devtools.scenario_tests.utilities import trim_kwargs_from_test_function
from azure.core.exceptions import HttpResponseError
from azure.search.documents import (
AnalyzeRequest,
AnalyzeResult,
CorsOptions,
EntityRecognitionSkill,
Field,
Index,
InputFieldMappingEntry,
OutputFieldMappingEntry,
ScoringProfile,
Skillset,
DataSourceCredentials,
DataSource,
DataContainer
)
from azure.search.documents.aio import SearchServiceClient
CWD = dirname(realpath(__file__))
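# test fixtures: the hotel index schema and a small batch of hotel documents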
with open(join(CWD, "..", "hotel_schema.json")) as schema_file:
    SCHEMA = schema_file.read()
with open(join(CWD, "..", "hotel_small.json"), encoding='utf-8') as batch_file:
    BATCH = json.load(batch_file)
TIME_TO_SLEEP = 5
CONNECTION_STRING = 'DefaultEndpointsProtocol=https;AccountName=storagename;AccountKey=NzhL3hKZbJBuJ2484dPTR+xF30kYaWSSCbs2BzLgVVI1woqeST/1IgqaLm6QAOTxtGvxctSNbIR/1hW8yH+bJg==;EndpointSuffix=core.windows.net'
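# storage connection string used only to build the DataSource fixtures in these tests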
def await_prepared_test(test_fn):
"""Synchronous wrapper for async test methods. Used to avoid making changes
upstream to AbstractPreparer (which doesn't await the functions it wraps)
"""
@functools.wraps(test_fn)
def run(test_class_instance, *args, **kwargs):
trim_kwargs_from_test_function(test_fn, kwargs)
loop = asyncio.get_event_loop()
return loop.run_until_complete(test_fn(test_class_instance, **kwargs))
return run
class SearchClientTest(AzureMgmtTestCase):
def _create_datasource(self, name="sample-datasource"):
credentials = DataSourceCredentials(connection_string=CONNECTION_STRING)
container = DataContainer(name='searchcontainer')
data_source = DataSource(
name=name,
type="azureblob",
credentials=credentials,
container=container
)
return data_source
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer()
async def test_get_service_statistics(self, api_key, endpoint, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.get_service_statistics()
assert isinstance(result, dict)
assert set(result.keys()) == {"counters", "limits"}
# Index operations
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer()
async def test_get_indexes_empty(self, api_key, endpoint, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.get_indexes()
assert len(result) == 0
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_indexes(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.get_indexes()
assert len(result) == 1
assert result[0].name == index_name
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_index(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.get_index(index_name)
assert result.name == index_name
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_index_statistics(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.get_index_statistics(index_name)
assert set(result.keys()) == {'document_count', 'storage_size'}
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_delete_indexes(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
await client.delete_index(index_name)
if self.is_live:
time.sleep(TIME_TO_SLEEP)
result = await client.get_indexes()
assert len(result) == 0
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_index(self, api_key, endpoint, index_name, **kwargs):
name = "hotels"
fields = [
{
"name": "hotelId",
"type": "Edm.String",
"key": True,
"searchable": False
},
{
"name": "baseRate",
"type": "Edm.Double"
}]
scoring_profile = ScoringProfile(
name="MyProfile"
)
scoring_profiles = []
scoring_profiles.append(scoring_profile)
cors_options = CorsOptions(allowed_origins=["*"], max_age_in_seconds=60)
index = Index(
name=name,
fields=fields,
scoring_profiles=scoring_profiles,
cors_options=cors_options)
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.create_index(index)
assert result.name == "hotels"
assert result.scoring_profiles[0].name == scoring_profile.name
assert result.cors_options.allowed_origins == cors_options.allowed_origins
assert result.cors_options.max_age_in_seconds == cors_options.max_age_in_seconds
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_or_update_index(self, api_key, endpoint, index_name, **kwargs):
name = "hotels"
fields = [
{
"name": "hotelId",
"type": "Edm.String",
"key": True,
"searchable": False
},
{
"name": "baseRate",
"type": "Edm.Double"
}]
cors_options = CorsOptions(allowed_origins=["*"], max_age_in_seconds=60)
scoring_profiles = []
index = Index(
name=name,
fields=fields,
scoring_profiles=scoring_profiles,
cors_options=cors_options)
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.create_or_update_index(index_name=index.name, index=index)
assert len(result.scoring_profiles) == 0
assert result.cors_options.allowed_origins == cors_options.allowed_origins
assert result.cors_options.max_age_in_seconds == cors_options.max_age_in_seconds
scoring_profile = ScoringProfile(
name="MyProfile"
)
scoring_profiles = []
scoring_profiles.append(scoring_profile)
index = Index(
name=name,
fields=fields,
scoring_profiles=scoring_profiles,
cors_options=cors_options)
result = await client.create_or_update_index(index_name=index.name, index=index)
assert result.scoring_profiles[0].name == scoring_profile.name
assert result.cors_options.allowed_origins == cors_options.allowed_origins
assert result.cors_options.max_age_in_seconds == cors_options.max_age_in_seconds
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_analyze_text(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
analyze_request = AnalyzeRequest(text="One's <two/>", analyzer="standard.lucene")
result = await client.analyze_text(index_name, analyze_request)
assert len(result.tokens) == 2
# Synonym Map operations
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_synonym_map(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.create_synonym_map("test-syn-map", [
"USA, United States, United States of America",
"Washington, Wash. => WA",
])
assert isinstance(result, dict)
assert result["name"] == "test-syn-map"
assert result["synonyms"] == [
"USA, United States, United States of America",
"Washington, Wash. => WA",
]
assert len(await client.get_synonym_maps()) == 1
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_delete_synonym_map(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
result = await client.create_synonym_map("test-syn-map", [
"USA, United States, United States of America",
"Washington, Wash. => WA",
])
assert len(await client.get_synonym_maps()) == 1
await client.delete_synonym_map("test-syn-map")
assert len(await client.get_synonym_maps()) == 0
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_synonym_map(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
await client.create_synonym_map("test-syn-map", [
"USA, United States, United States of America",
"Washington, Wash. => WA",
])
assert len(await client.get_synonym_maps()) == 1
result = await client.get_synonym_map("test-syn-map")
assert isinstance(result, dict)
assert result["name"] == "test-syn-map"
assert result["synonyms"] == [
"USA, United States, United States of America",
"Washington, Wash. => WA",
]
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_synonym_maps(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
await client.create_synonym_map("test-syn-map-1", [
"USA, United States, United States of America",
])
await client.create_synonym_map("test-syn-map-2", [
"Washington, Wash. => WA",
])
result = await client.get_synonym_maps()
assert isinstance(result, list)
assert all(isinstance(x, dict) for x in result)
assert set(x['name'] for x in result) == {"test-syn-map-1", "test-syn-map-2"}
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_or_update_synonym_map(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
await client.create_synonym_map("test-syn-map", [
"USA, United States, United States of America",
])
assert len(await client.get_synonym_maps()) == 1
await client.create_or_update_synonym_map("test-syn-map", [
"Washington, Wash. => WA",
])
assert len(await client.get_synonym_maps()) == 1
result = await client.get_synonym_map("test-syn-map")
assert isinstance(result, dict)
assert result["name"] == "test-syn-map"
assert result["synonyms"] == [
"Washington, Wash. => WA",
]
# Skillset operations
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_skillset(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
result = await client.create_skillset(name='test-ss', skills=[s], description="desc")
assert isinstance(result, Skillset)
assert result.name == "test-ss"
assert result.description == "desc"
assert result.e_tag
assert len(result.skills) == 1
assert isinstance(result.skills[0], EntityRecognitionSkill)
assert len(await client.get_skillsets()) == 1
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_delete_skillset(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
result = await client.create_skillset(name='test-ss', skills=[s], description="desc")
assert len(await client.get_skillsets()) == 1
await client.delete_skillset("test-ss")
if self.is_live:
time.sleep(TIME_TO_SLEEP)
assert len(await client.get_skillsets()) == 0
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_skillset(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
await client.create_skillset(name='test-ss', skills=[s], description="desc")
assert len(await client.get_skillsets()) == 1
result = await client.get_skillset("test-ss")
assert isinstance(result, Skillset)
assert result.name == "test-ss"
assert result.description == "desc"
assert result.e_tag
assert len(result.skills) == 1
assert isinstance(result.skills[0], EntityRecognitionSkill)
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_skillsets(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
await client.create_skillset(name='test-ss-1', skills=[s], description="desc1")
await client.create_skillset(name='test-ss-2', skills=[s], description="desc2")
result = await client.get_skillsets()
assert isinstance(result, list)
assert all(isinstance(x, Skillset) for x in result)
assert set(x.name for x in result) == {"test-ss-1", "test-ss-2"}
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_or_update_skillset(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
await client.create_or_update_skillset(name='test-ss', skills=[s], description="desc1")
await client.create_or_update_skillset(name='test-ss', skills=[s], description="desc2")
assert len(await client.get_skillsets()) == 1
result = await client.get_skillset("test-ss")
assert isinstance(result, Skillset)
assert result.name == "test-ss"
assert result.description == "desc2"
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_or_update_skillset_inplace(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
s = EntityRecognitionSkill(inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="organizations", target_name="organizations")])
ss = await client.create_or_update_skillset(name='test-ss', skills=[s], description="desc1")
await client.create_or_update_skillset(name='test-ss', skills=[s], description="desc2", skillset=ss)
assert len(await client.get_skillsets()) == 1
result = await client.get_skillset("test-ss")
assert isinstance(result, Skillset)
assert result.name == "test-ss"
assert result.description == "desc2"
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_datasource_async(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
data_source = self._create_datasource()
result = await client.create_datasource(data_source)
assert result.name == "sample-datasource"
assert result.type == "azureblob"
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_delete_datasource_async(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
data_source = self._create_datasource()
result = await client.create_datasource(data_source)
assert len(await client.get_datasources()) == 1
await client.delete_datasource("sample-datasource")
assert len(await client.get_datasources()) == 0
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_get_datasource_async(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
data_source = self._create_datasource()
created = await client.create_datasource(data_source)
result = await client.get_datasource("sample-datasource")
assert result.name == "sample-datasource"
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_list_datasource_async(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
data_source1 = self._create_datasource()
data_source2 = self._create_datasource(name="another-sample")
created1 = await client.create_datasource(data_source1)
created2 = await client.create_datasource(data_source2)
result = await client.get_datasources()
assert isinstance(result, list)
assert set(x.name for x in result) == {"sample-datasource", "another-sample"}
@ResourceGroupPreparer(random_name_enabled=True)
@SearchServicePreparer(schema=SCHEMA, index_batch=BATCH)
async def test_create_or_update_datasource_async(self, api_key, endpoint, index_name, **kwargs):
client = SearchServiceClient(endpoint, AzureKeyCredential(api_key))
data_source = self._create_datasource()
created = await client.create_datasource(data_source)
assert len(await client.get_datasources()) == 1
data_source.description = "updated"
await client.create_or_update_datasource(data_source)
assert len(await client.get_datasources()) == 1
result = await client.get_datasource("sample-datasource")
assert result.name == "sample-datasource"
assert result.description == "updated"
| 47.521739 | 208 | 0.688207 | 2,222 | 20,767 | 6.214221 | 0.108461 | 0.050188 | 0.032445 | 0.068801 | 0.816411 | 0.795481 | 0.789253 | 0.77332 | 0.762094 | 0.757459 | 0 | 0.004164 | 0.202099 | 20,767 | 436 | 209 | 47.630734 | 0.829149 | 0.024414 | 0 | 0.648 | 0 | 0.002667 | 0.092647 | 0.009191 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.008 | false | 0 | 0.037333 | 0 | 0.056 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0eaf5aa359a4e653021f4a04a1b930ac7f43778c | 1,276 | py | Python | CodingInterview2/46_TranslateNumbersToStrings/test_translate_numbers_to_strings.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 10 | 2020-07-06T11:00:58.000Z | 2022-01-29T09:25:24.000Z | CodingInterview2/46_TranslateNumbersToStrings/test_translate_numbers_to_strings.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | null | null | null | CodingInterview2/46_TranslateNumbersToStrings/test_translate_numbers_to_strings.py | hscspring/TheAlgorithms-Python | 5c2faea1d2d25a9a81a4786e053b0cc58ab46c6f | [
"MIT"
] | 3 | 2020-07-13T06:39:23.000Z | 2020-08-15T16:29:48.000Z | from translate_numbers_to_strings import get_translation_count1
from translate_numbers_to_strings import get_translation_count2
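# Classic problem: conventionally 0 -> 'a', 1 -> 'b', ..., 25 -> 'z'; count the
# distinct letter strings a number can translate to, where an adjacent digit
# pair in 10..25 may also be taken as one letter. Example: 12258 has 5
# translations (1-2-2-5-8, 12-2-5-8, 1-22-5-8, 1-2-25-8, 12-25-8).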
def test0():
assert get_translation_count1(0) == 1
assert get_translation_count2(0) == 1
def test10():
assert get_translation_count1(10) == 2
assert get_translation_count2(10) == 2
def test25():
assert get_translation_count1(25) == 2
assert get_translation_count2(25) == 2
def test26():
assert get_translation_count1(26) == 1
assert get_translation_count2(26) == 1
def test125():
assert get_translation_count1(125) == 3
assert get_translation_count2(125) == 3
def test126():
assert get_translation_count1(126) == 2
assert get_translation_count2(126) == 2
def test426():
assert get_translation_count1(426) == 1
assert get_translation_count2(426) == 1
def test100():
assert get_translation_count1(100) == 2
assert get_translation_count2(100) == 2
def test101():
assert get_translation_count1(101) == 2
assert get_translation_count2(101) == 2
def test12258():
assert get_translation_count1(12258) == 5
assert get_translation_count2(12258) == 5
def testm100():
assert get_translation_count1(-100) == 0
assert get_translation_count2(-100) == 0
| 22.385965 | 63 | 0.725705 | 172 | 1,276 | 5.069767 | 0.215116 | 0.385321 | 0.504587 | 0.327982 | 0.463303 | 0.112385 | 0.112385 | 0.112385 | 0 | 0 | 0 | 0.129647 | 0.1779 | 1,276 | 56 | 64 | 22.785714 | 0.701621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.628571 | 1 | 0.314286 | true | 0 | 0.057143 | 0 | 0.371429 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7edf205d630cecedaf8c2a2483135a4ee8cd6127 | 54 | py | Python | dags/_gen_fernetkey.py | nawinto99/airflow-workflow | efe1b7d35fef4535c18b19bcf686090346414eec | [
"MIT"
] | null | null | null | dags/_gen_fernetkey.py | nawinto99/airflow-workflow | efe1b7d35fef4535c18b19bcf686090346414eec | [
"MIT"
] | null | null | null | dags/_gen_fernetkey.py | nawinto99/airflow-workflow | efe1b7d35fef4535c18b19bcf686090346414eec | [
"MIT"
] | null | null | null | import sys
from pprint import pprint
pprint(sys.path)
| 13.5 | 25 | 0.814815 | 9 | 54 | 4.888889 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12963 | 54 | 3 | 26 | 18 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
7d0c0719ef647fae07b178abaac9cae066ed1208 | 11,749 | py | Python | test/python/BlockchainTest.py | teheperor/dvf-blockchain | 72c6e49e2901711b160cecc6f78d25d182977fbc | [
"MIT"
] | 1 | 2019-11-06T05:02:47.000Z | 2019-11-06T05:02:47.000Z | test/python/BlockchainTest.py | teheperor/dvf-blockchain | 72c6e49e2901711b160cecc6f78d25d182977fbc | [
"MIT"
] | 4 | 2021-05-10T01:51:50.000Z | 2022-01-22T08:51:13.000Z | test/python/BlockchainTest.py | teheperor/dvf-blockchain | 72c6e49e2901711b160cecc6f78d25d182977fbc | [
"MIT"
] | null | null | null | import json
import unittest
from urllib.parse import urlparse
import urllib.request
class BlockchainTest(unittest.TestCase):
def __init__(self, server1, server2):
super().__init__('run_test')
self.server1 = server1
self.server2 = server2
self.values = {}
def run_test(self):
tests = [
self.test_server1_chain_1st,
self.test_server1_mine_1st,
self.test_server1_chain_2nd,
self.test_server1_transactions_new_1st,
self.test_server1_mine_2nd,
self.test_server1_chain_3rd,
self.test_server2_chain_1st,
self.test_server2_mine_1st,
self.test_server2_chain_2nd,
self.test_server1_nodes_register_1st,
self.test_server1_nodes_resolve_1st,
self.test_server2_nodes_register_1st,
self.test_server2_nodes_resolve_1st,
]
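        # run in order: later subtests assert on chain state created by earlier ones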
for test in tests:
with self.subTest(test=test):
test()
def test_server1_chain_1st(self):
        req = urllib.request.Request(f'{self.server1}/chain')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
block = obj['chain'][0]
self.assertEqual(block['index'], 1)
self.assertEqual(block['previous_hash'], '1')
self.assertEqual(block['proof'], 100)
self.assertEqual(len(block['transactions']), 0)
self.values['server1-block-1'] = block
def test_server1_mine_1st(self):
        req = urllib.request.Request(f'{self.server1}/mine')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'New Block Forged')
transactions = obj['transactions']
self.assertEqual(len(transactions), 1)
transaction = transactions[0]
self.assertEqual(transaction['amount'], 1)
self.assertEqual(transaction['sender'], '0')
self.values['server1-mine-1'] = obj
self.values['server1-node-identifier'] = transaction['recipient']
def test_server1_chain_2nd(self):
        req = urllib.request.Request(f'{self.server1}/chain')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
chain = obj['chain']
self.assertEqual(len(chain), 2)
self.assertEqual(chain[0], self.values['server1-block-1'])
block = obj['chain'][1]
mine = self.values['server1-mine-1']
self.assertEqual(block['index'], mine['index'])
self.assertEqual(block['previous_hash'], mine['previous_hash'])
self.assertEqual(block['proof'], mine['proof'])
self.assertEqual(block['transactions'], mine['transactions'])
self.values['server1-block-2'] = block
def test_server1_transactions_new_1st(self):
data = {
'sender': self.values['server1-node-identifier'],
'recipient': 'someone-other-address',
'amount': 5,
}
headers = { 'Content-Type': 'application/json' }
req = urllib.request.Request(
            f'{self.server1}/transactions/new', json.dumps(data).encode(), headers)
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 201)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'Transaction will be added to Block 3')
def test_server1_mine_2nd(self):
        req = urllib.request.Request(f'{self.server1}/mine')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'New Block Forged')
transactions = obj['transactions']
self.assertEqual(len(transactions), 2)
node = self.values['server1-node-identifier']
transaction = transactions[0]
self.assertEqual(transaction['amount'], 5)
self.assertEqual(transaction['recipient'], 'someone-other-address')
self.assertEqual(transaction['sender'], node)
transaction = transactions[1]
self.assertEqual(transaction['amount'], 1)
self.assertEqual(transaction['recipient'], node)
self.assertEqual(transaction['sender'], '0')
self.values['server1-mine-2'] = obj
def test_server1_chain_3rd(self):
        req = urllib.request.Request(f'{self.server1}/chain')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
chain = obj['chain']
self.assertEqual(len(chain), 3)
self.assertEqual(chain[0], self.values['server1-block-1'])
self.assertEqual(chain[1], self.values['server1-block-2'])
block = obj['chain'][2]
mine = self.values['server1-mine-2']
self.assertEqual(block['index'], mine['index'])
self.assertEqual(block['previous_hash'], mine['previous_hash'])
self.assertEqual(block['proof'], mine['proof'])
self.assertEqual(block['transactions'], mine['transactions'])
self.values['server1-block-3'] = block
def test_server2_chain_1st(self):
        req = urllib.request.Request(f'{self.server2}/chain')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
block = obj['chain'][0]
self.assertEqual(block['index'], 1)
self.assertEqual(block['previous_hash'], '1')
self.assertEqual(block['proof'], 100)
self.assertEqual(len(block['transactions']), 0)
self.values['server2-block-1'] = block
def test_server2_mine_1st(self):
        req = urllib.request.Request(f'{self.server2}/mine')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'New Block Forged')
transactions = obj['transactions']
self.assertEqual(len(transactions), 1)
transaction = transactions[0]
self.assertEqual(transaction['amount'], 1)
self.assertEqual(transaction['sender'], '0')
self.values['server2-mine-1'] = obj
self.values['server2-node-identifier'] = transaction['recipient']
def test_server2_chain_2nd(self):
        req = urllib.request.Request(f'{self.server2}/chain')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
chain = obj['chain']
self.assertEqual(len(chain), 2)
self.assertEqual(chain[0], self.values['server2-block-1'])
block = obj['chain'][1]
mine = self.values['server2-mine-1']
self.assertEqual(block['index'], mine['index'])
self.assertEqual(block['previous_hash'], mine['previous_hash'])
self.assertEqual(block['proof'], mine['proof'])
self.assertEqual(block['transactions'], mine['transactions'])
self.values['server2-block-2'] = block
def test_server1_nodes_register_1st(self):
        data = { 'nodes': [ self.server2 ] }
headers = { 'Content-Type': 'application/json' }
req = urllib.request.Request(
            f'{self.server1}/nodes/register', json.dumps(data).encode(), headers)
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 201)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'New nodes have been added')
nodes = obj['total_nodes']
self.assertEqual(len(nodes), 1)
self.assertEqual(nodes[0], urlparse(self.server2).netloc)
def test_server1_nodes_resolve_1st(self):
req = urllib.request.Request(f'{self.server1}/nodes/resolve')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'Our chain is authoritative')
chain = obj['chain']
self.assertEqual(len(chain), 3)
self.assertEqual(chain[0], self.values['server1-block-1'])
self.assertEqual(chain[1], self.values['server1-block-2'])
self.assertEqual(chain[2], self.values['server1-block-3'])
def test_server2_nodes_register_1st(self):
data = { 'nodes': [ self.server1 ] }
headers = { 'Content-Type': 'application/json' }
req = urllib.request.Request(
            f'{self.server2}/nodes/register', json.dumps(data).encode(), headers)
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 201)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'New nodes have been added')
nodes = obj['total_nodes']
self.assertEqual(len(nodes), 1)
self.assertEqual(nodes[0], urlparse(self.server1).netloc)
def test_server2_nodes_resolve_1st(self):
req = urllib.request.Request(f'{self.server2}/nodes/resolve')
with urllib.request.urlopen(req) as res:
body = res.read()
self.assertEqual(res.status, 200)
self.assertIn('application/json', res.getheader('Content-Type'))
obj = json.loads(body)
self.assertEqual(obj['message'], 'Our chain was replaced')
chain = obj['new_chain']
self.assertEqual(len(chain), 3)
self.assertEqual(chain[0], self.values['server1-block-1'])
self.assertEqual(chain[1], self.values['server1-block-2'])
self.assertEqual(chain[2], self.values['server1-block-3'])
if __name__ == '__main__':
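    # usage: python BlockchainTest.py -s1 http://localhost:5000 -s2 http://localhost:5001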
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('-s1', '--server1', default="http://localhost:5000", type=str)
parser.add_argument('-s2', '--server2', default="http://localhost:5001", type=str)
args = parser.parse_args()
server1 = args.server1
server2 = args.server2
suite = unittest.TestSuite()
suite.addTest(BlockchainTest(server1, server2))
unittest.TextTestRunner().run(suite)
| 38.521311 | 86 | 0.596306 | 1,300 | 11,749 | 5.296923 | 0.087692 | 0.159018 | 0.046907 | 0.043421 | 0.861894 | 0.801191 | 0.741069 | 0.726692 | 0.705925 | 0.685594 | 0 | 0.026636 | 0.265044 | 11,749 | 304 | 87 | 38.648026 | 0.770816 | 0 | 0 | 0.603376 | 0 | 0 | 0.167674 | 0.02247 | 0 | 0 | 0 | 0 | 0.362869 | 1 | 0.063291 | false | 0 | 0.021097 | 0 | 0.088608 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d200d44e3f98de9a716bc93d7bfb53ef3fdbc04 | 86 | py | Python | test/test_cli.py | nschloe/dedec | 1adf37e05dfb7e257b00bff3c4f1b39dfb700005 | [
"MIT"
] | 4 | 2021-03-10T20:40:29.000Z | 2022-03-24T02:56:34.000Z | test/test_cli.py | nschloe/decimal2rational | 1adf37e05dfb7e257b00bff3c4f1b39dfb700005 | [
"MIT"
] | 4 | 2016-07-31T15:00:49.000Z | 2017-04-07T09:58:59.000Z | test/test_cli.py | nschloe/dedec | 1adf37e05dfb7e257b00bff3c4f1b39dfb700005 | [
"MIT"
] | null | null | null | import identinum
def test_cli():
identinum.cli.main(["{:f}".format(3.0 / 7.0)])
| 14.333333 | 50 | 0.616279 | 14 | 86 | 3.714286 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 0.151163 | 86 | 5 | 51 | 17.2 | 0.657534 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7d327befa6ad8db3ed0e9284a1cd98237202a171 | 110 | py | Python | application/models/__init__.py | demetrius-mp/flask-template | 2dbab372bf2d7d5ff60af430c4b69c95a41cd681 | [
"MIT"
] | null | null | null | application/models/__init__.py | demetrius-mp/flask-template | 2dbab372bf2d7d5ff60af430c4b69c95a41cd681 | [
"MIT"
] | 2 | 2021-10-14T02:00:15.000Z | 2021-10-14T02:19:44.000Z | application/models/__init__.py | demetrius-mp/flask-template | 2dbab372bf2d7d5ff60af430c4b69c95a41cd681 | [
"MIT"
] | null | null | null | from application.models.user import User # noqa: F401
from application.models.role import Role # noqa: F401
| 36.666667 | 54 | 0.781818 | 16 | 110 | 5.375 | 0.5 | 0.348837 | 0.488372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0.145455 | 110 | 2 | 55 | 55 | 0.851064 | 0.190909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
adc5ec98faf83c8872bd9e1f630c1141e42e929b | 96 | py | Python | bol/utils/__init__.py | harveenchadha/bol | 0f720813107ab2f41e895917cd0359e8c0738dd1 | [
"MIT"
] | 10 | 2021-07-09T12:27:27.000Z | 2022-03-23T07:36:53.000Z | bol/utils/__init__.py | harveenchadha/bol | 0f720813107ab2f41e895917cd0359e8c0738dd1 | [
"MIT"
] | 4 | 2021-07-05T19:18:32.000Z | 2021-09-09T07:18:23.000Z | bol/utils/__init__.py | harveenchadha/bol | 0f720813107ab2f41e895917cd0359e8c0738dd1 | [
"MIT"
] | 3 | 2021-08-05T06:34:31.000Z | 2022-03-30T13:22:47.000Z | from .helper_functions import *
from .constants import *
from .file_operations.file_ops import * | 32 | 39 | 0.8125 | 13 | 96 | 5.769231 | 0.615385 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 96 | 3 | 39 | 32 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc3b329d4d7cdd1cc18a6759498eb25f63bae230 | 46 | py | Python | share/gaffer/gui/mtlx_input_init.py | Sosoyan/materialXBox | 75fae5b42a9136f9646a4ed12d6f155f00e7bc1d | [
"BSD-3-Clause"
] | 1 | 2021-03-05T11:54:38.000Z | 2021-03-05T11:54:38.000Z | share/gaffer/gui/mtlx_input_init.py | Sosoyan/materialXBox | 75fae5b42a9136f9646a4ed12d6f155f00e7bc1d | [
"BSD-3-Clause"
] | null | null | null | share/gaffer/gui/mtlx_input_init.py | Sosoyan/materialXBox | 75fae5b42a9136f9646a4ed12d6f155f00e7bc1d | [
"BSD-3-Clause"
] | 1 | 2021-02-12T11:20:07.000Z | 2021-02-12T11:20:07.000Z | import mtlx_input
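# 'application' is assumed to be provided by Gaffer when it executes GUI startup scripts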
mtlx_input.init(application) | 23 | 28 | 0.891304 | 7 | 46 | 5.571429 | 0.714286 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 46 | 2 | 28 | 23 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
70c1eeb4222a28640319aff7ac6878c0e4d90e50 | 22,581 | py | Python | authors/apps/authentication/test/test_authentication.py | Kasulejoseph/ah-backend-athena | 016810d6a2391ae45985b4d43003e51ada1e81be | [
"BSD-3-Clause"
] | null | null | null | authors/apps/authentication/test/test_authentication.py | Kasulejoseph/ah-backend-athena | 016810d6a2391ae45985b4d43003e51ada1e81be | [
"BSD-3-Clause"
] | 31 | 2018-11-26T17:42:35.000Z | 2022-03-11T23:36:55.000Z | authors/apps/authentication/test/test_authentication.py | Kasulejoseph/ah-backend-athena | 016810d6a2391ae45985b4d43003e51ada1e81be | [
"BSD-3-Clause"
] | 6 | 2018-11-23T09:55:02.000Z | 2021-06-17T15:18:49.000Z | import json
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase, APIClient, APIRequestFactory
from ..serializers import LoginSerializer
from rest_framework.exceptions import ValidationError
from ..views import VerifyAccount, RegistrationAPIView
from ..models import UserManager, User
from unittest.mock import patch, Mock, call
from ..social.google_token_validator import GoogleValidate
from ..social.facebook_token_validator import FacebookValidate
from ..social.twitter_token_validator import TwitterValidate
from ..views import GoogleAuthAPIView, FacebookAuthAPIView, TwitterAuthAPIView
from ..serializers import GoogleAuthSerializer
from google.auth.transport import requests
class TestUsers(APITestCase):
def setUp(self):
self.client = APIClient()
def generate_user(self, username='', email='', password=''):
user = {
'user': {
'email': email,
'username': username,
'password': password
}
}
return user
def verify_account(self, token, uidb64):
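        # build a GET request to the activation URL, i.e. simulate the user following the emailed link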
request = APIRequestFactory().get(
reverse(
"activate_account",
kwargs={
"token": token,
"uidb64": uidb64}))
verify_account = VerifyAccount.as_view()
response = verify_account(request, token=token, uidb64=uidb64)
return response
def create_user(self, username='', email='', password=''):
user = self.generate_user(username, email, password)
self.client.post('/api/users/', user, format='json')
return user
def test_user_registration(self):
user = self.generate_user(
'athena', 'athena@gmail.com', 'P1assword@user')
response = self.client.post('/api/users/', user, format='json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(
json.loads(
response.content), {
"user": {
"message": "A verification email has been sent to athena@gmail.com"}})
def test_cannot_login_without_verification(self):
self.create_user('athena', 'athena@gmail.com', 'P1assword@user')
login_details = self.generate_user(
'', 'athena@gmail.com', 'P1assword@user')
response = self.client.post(
'/api/users/login/', login_details, format='json')
self.assertEqual(
json.loads(
response.content), {
"errors": {
"error": ["Your email is not verified, Please check your email for a verification link"]}})
def test_user_registration_empty_details(self):
user = self.generate_user('', '', '')
response = self.client.post('/api/users/', user, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_user_registration_wrong_email_format(self):
user = self.generate_user('athena', 'athenmail', 'P1assword@user')
response = self.client.post('/api/users/', user, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
def test_user_login(self):
self.create_user('athena', 'athena@gmail.com', '1Password@user')
login_details = self.generate_user(
'', 'athena@gmail.com', '1Password@user')
request = APIRequestFactory().post(
reverse("registration")
)
user = User.objects.get()
token, uidb64 = RegistrationAPIView.generate_activation_link(
user, request, send=False)
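        # send=False: obtain the activation token/uidb64 directly instead of emailing the link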
self.verify_account(token, uidb64)
response = self.client.post(
'/api/users/login/', login_details, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
json.loads(response.content),
{"user": {
"email": "athena@gmail.com",
"username": "athena",
'token': response.data['token']
}
}
)
def test_unauthorized_access_to_authenticated_endpoint(self):
self.create_user('kasule', 'athena@gmail.com', 'Password@user1')
login_details = self.generate_user(
'', 'athena@gmail.com', 'Password@user1')
response = self.client.post(
'/api/user/', login_details, format='json')
self.assertTrue(response.status_code == 403)
self.assertEqual(
json.loads(response.content),
{"user": {
"detail": "Authentication credentials were not provided."
}
}
)
def test_user_with_valid_token_access_protected_endpoints(self):
self.create_user('soko', 'athena@gmail.com', 'Password@user1')
login_details = self.generate_user(
'', 'athena@gmail.com', 'Password@user1')
request = APIRequestFactory().post(
reverse("registration")
)
user = User.objects.get()
token, uidb64 = RegistrationAPIView.generate_activation_link(
user, request, send=False)
self.verify_account(token, uidb64)
response = self.client.post(
'/api/users/login/', login_details, format='json')
token = response.data['token']
self.client.credentials(HTTP_AUTHORIZATION='Bearer ' + token)
res = self.client.get(
'/api/user/', login_details, format='json')
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertEqual(
json.loads(res.content),
{"user": {
"email": "athena@gmail.com",
"username": "soko",
'token': res.data['token']
}
}
)
def test_invalid_token(self):
self.create_user('josh', 'athena@gmail.com', 'Password@user1')
login_details = self.generate_user(
'', 'athena@gmail.com', 'Password@user1')
self.client.credentials(HTTP_AUTHORIZATION='Bearer ' + '123hjhj12')
res = self.client.get(
'/api/user/', login_details, format='json')
self.assertTrue(res.status_code == 401)
self.assertEqual(
'Invalid token. please login again', res.data['detail'])
def test_login_jwt_with_bad_credentials(self):
self.create_user('kica', 'athena@gmail.com', 'Password@user11')
login_details = self.generate_user(
'', 'kica@gmail.com', 'Password@user11')
response = self.client.post(
'/api/users/login/', login_details, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertEqual(
{"errors": {
"error": [
"A user with this email and password was not found."]
}
},
json.loads(response.content))
def test_email_is_required(self):
data = {
"email": None,
"password": "Password1"
}
with self.assertRaises(ValidationError) as email_error:
LoginSerializer().validate(data)
exce = email_error.exception
self.assertIn('An email address is required to log in', str(exce))
def test_password_is_required(self):
data = {
"email": 'athena@gmail.com',
"password": None
}
with self.assertRaises(ValidationError) as pass_error:
LoginSerializer().validate(data)
exce = pass_error.exception
self.assertIn('A password is required to log in.', str(exce))
class TestSocialAuthUsers(APITestCase):
def setUp(self):
self.client = APIClient()
def save_user_to_db(self, username='', email='', password=''):
user = {
'user': {
'email': email,
'username': username,
'password': "45fdcgcWQjjhvnkb"
}
}
        self.client.post('/api/users/', user, format='json')
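    # The tests below patch the provider token validators, so no real
    # Google/Facebook/Twitter network calls are made.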
def test_google_validate_token_is_called(self):
with patch('authors.apps.authentication.social.google_token_validator.id_token.verify_oauth2_token') as mock_google_validate:
GoogleValidate.validate_google_token('access token')
self.assertTrue(mock_google_validate.called)
def test_verify_google_auth_raises_exception_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.google_token_validator.id_token.verify_oauth2_token') as mock_google_validate:
GoogleValidate.validate_google_token('token')
mock_google_validate.side_effect = ValueError
self.assertRaises(ValueError, mock_google_validate)
self.assertIsNone(GoogleValidate.validate_google_token('token'))
def test_google_validate_returns_correct_data_when_token_is_valid(self):
google_user_info_valid_response = {
"name": "andrew", "email": "andrew@a.com", "sub": "104383024388008549815"}
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = google_user_info_valid_response
self.assertEqual(mock_google_validate(
'VALID google token'), google_user_info_valid_response)
def test_google_validate_returns_none_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = None
self.assertIsNone(mock_google_validate('INVALID google token'))
def test_google_login_valid_token(self):
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "sub": "104383024388008549815"}
res = self.client.post(
'/api/users/google/', {"token": "valid token for google"}, format='json')
self.assertEqual(res.status_code, status.HTTP_200_OK,
"Response status should be 200 OK")
self.assertIn("jwt_token", json.loads(res.content)['user'])
def test_google_login_invalid_token(self):
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = None
res = self.client.post(
'/api/users/google/', {"token": "valid token for google"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Invalid token please try again"]}})
def test_google_login_missing_key_sub_should_return_error(self):
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "some_other_thing": "104383024388008549815"}
res = self.client.post(
'/api/users/google/', {"token": "valid token for google"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Token is not valid or has expired. Please get a new one."]}})
def test_google_user_with_attached_email_already_exists_in_db(self):
self.save_user_to_db('andrew', 'andrew@a.com', '1P@ssword')
with patch('authors.apps.authentication.social.google_token_validator.GoogleValidate.validate_google_token') as mock_google_validate:
mock_google_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "sub": "104383024388008549815"}
res = self.client.post(
'/api/users/google/', {"token": "valid token for google"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Failed to register the user. Email already exists in the database"]}})
def test_facebook_validate_token_is_called(self):
with patch('authors.apps.authentication.social.facebook_token_validator.facebook.GraphAPI') as mock_facebook_validate:
FacebookValidate.validate_facebook_token('access token')
self.assertTrue(mock_facebook_validate.called)
mock_facebook_validate.assert_called_with(
access_token='access token', version='3.1')
def test_verify_facebook_auth_raises_exception_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.facebook_token_validator.facebook.GraphAPI') as mock_facebook_validate:
FacebookValidate.validate_facebook_token('token')
mock_facebook_validate.side_effect = ValueError
self.assertRaises(ValueError, mock_facebook_validate)
self.assertIsNone(
FacebookValidate.validate_facebook_token('token'))
def test_facebook_validate_returns_correct_data_when_token_is_valid(self):
facebook_user_info_valid_response = {
"name": "andrew", "email": "andrew@a.com", "id": "104383024388008549815"}
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = facebook_user_info_valid_response
self.assertEqual(mock_facebook_validate(
'VALID facebook token'), facebook_user_info_valid_response)
def test_facebook_validate_returns_none_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = None
self.assertIsNone(mock_facebook_validate('INVALID facebook token'))
def test_facebook_login_valid_token(self):
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "id": "104383024388008549815"}
mock_facebook_validate('token')
res = self.client.post(
'/api/users/facebook/', {"token": "valid token for facebook"}, format='json')
self.assertEqual(res.status_code, status.HTTP_200_OK,
"Response status should be 200 OK")
self.assertIn("jwt_token", json.loads(res.content)['user'])
def test_facebook_login_invalid_token(self):
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = None
res = self.client.post(
'/api/users/facebook/', {"token": "invalid token for facebook"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Invalid token please try again"]}})
def test_facebook_login_missing_key_sub_should_return_error(self):
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "some_other_thing": "104383024388008549815"}
res = self.client.post(
'/api/users/facebook/', {"token": "valid token for facebook"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Token is not valid or has expired. Please get a new one."]}})
def test_facebook_user_with_attached_email_already_exists_in_db(self):
self.save_user_to_db('andrew', 'andrew@a.com', 'P@ssword1')
with patch('authors.apps.authentication.social.facebook_token_validator.FacebookValidate.validate_facebook_token') as mock_facebook_validate:
mock_facebook_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "id": "104383024388008549815"}
res = self.client.post(
'/api/users/facebook/', {"token": "valid token for facebook"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Failed to register the user. Email already exists in the database"]}})
def test_twitter_validate_token_is_called(self):
with patch('authors.apps.authentication.social.twitter_token_validator.twitter.Api') as mock_twitter_validate:
TwitterValidate.validate_twitter_token('access token')
self.assertTrue(mock_twitter_validate.called)
def test_verify_twitter_auth_raises_exception_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.twitter_token_validator.twitter.Api') as mock_twitter_validate:
TwitterValidate.validate_twitter_token('token1 token2')
mock_twitter_validate.side_effect = ValueError
self.assertRaises(ValueError, mock_twitter_validate)
self.assertIsNone(TwitterValidate.validate_twitter_token('token'))
def test_twitter_validate_returns_correct_data_when_token_is_valid(self):
twitter_user_info_valid_response = {
"name": "andrew", "email": "andrew@a.com", "id_str": "104383024388008549815"}
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = twitter_user_info_valid_response
self.assertEqual(mock_twitter_validate(
'VALID twitter token'), twitter_user_info_valid_response)
def test_twitter_validate_returns_none_when_token_is_invalid(self):
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = None
self.assertIsNone(mock_twitter_validate('INVALID twitter token'))
def test_twitter_login_valid_token(self):
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "id_str": "104383024388008549815"}
mock_twitter_validate('token')
res = self.client.post(
'/api/users/twitter/', {"token": "valid token for twitter"}, format='json')
self.assertEqual(res.status_code, status.HTTP_200_OK,
"Response status should be 200 OK")
self.assertIn("jwt_token", json.loads(res.content)['user'])
def test_twitter_login_invalid_token(self):
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = None
res = self.client.post(
'/api/users/twitter/', {"token": "valid token for twitter"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Invalid token please try again"]}})
def test_twitter_login_missing_key_sub_should_return_error(self):
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "some_other_thing": "104383024388008549815"}
res = self.client.post(
'/api/users/twitter/', {"token": "valid token for twitter"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Token is not valid or has expired. Please get a new one."]}})
def test_twitter_user_with_attached_email_already_exists_in_db(self):
self.save_user_to_db('andrew', 'andrew@a.com', 'P@ssword')
with patch('authors.apps.authentication.social.twitter_token_validator.TwitterValidate.validate_twitter_token') as mock_twitter_validate:
mock_twitter_validate.return_value = {
"name": "andrew", "email": "andrew@a.com", "id_str": "104383024388008549815"}
res = self.client.post(
'/api/users/twitter/', {"token": "valid token for twitter"}, format='json')
self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST,
"Response status should be 400 BAD REQUEST")
self.assertEqual(json.loads(res.content), {"errors": {
"auth_token": ["Failed to register the user. Email already exists in the database"]}})
| 53.509479 | 149 | 0.655463 | 2,483 | 22,581 | 5.696738 | 0.086589 | 0.039236 | 0.027147 | 0.033934 | 0.808978 | 0.768752 | 0.74217 | 0.711842 | 0.673029 | 0.656062 | 0 | 0.023101 | 0.237013 | 22,581 | 421 | 150 | 53.63658 | 0.797899 | 0 | 0 | 0.525469 | 0 | 0 | 0.274921 | 0.109118 | 0.002681 | 0 | 0 | 0 | 0.158177 | 1 | 0.10992 | false | 0.061662 | 0.040214 | 0 | 0.163539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
cb02400133efa2bd7f9c3af9c6ac27372e23a50c | 137 | py | Python | pretrain/data/datasets/__init__.py | thilinicooray/VL-BERT | fef1e04557542733b4f7519d6288a4588ea5a040 | [
"MIT"
] | 5 | 2020-12-08T12:38:48.000Z | 2021-11-25T13:19:16.000Z | code/vl-bert/pretrain/data/datasets/__init__.py | e-bug/mpre-unmasked | cd12250b58152a558e15a33113bf98d90b88e776 | [
"MIT"
] | 1 | 2021-06-21T04:05:26.000Z | 2021-06-21T04:05:26.000Z | code/vl-bert/pretrain/data/datasets/__init__.py | e-bug/mpre-unmasked | cd12250b58152a558e15a33113bf98d90b88e776 | [
"MIT"
] | 1 | 2021-06-08T02:31:59.000Z | 2021-06-08T02:31:59.000Z | from .conceptual_captions import ConceptualCaptionsDataset
from .vcr_corpus import VCRCorpus
from .general_corpus import GeneralCorpus
| 22.833333 | 58 | 0.875912 | 15 | 137 | 7.8 | 0.666667 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10219 | 137 | 5 | 59 | 27.4 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cb1813a1a8b3e780a324fb6c6a96b3e9f30bcbf1 | 7,067 | py | Python | torchrec/optim/tests/test_clipping.py | terrorizer1980/torchrec | 824efb76e4a1c8500e5ce976ac01e6bae894e03a | [
"BSD-3-Clause"
] | null | null | null | torchrec/optim/tests/test_clipping.py | terrorizer1980/torchrec | 824efb76e4a1c8500e5ce976ac01e6bae894e03a | [
"BSD-3-Clause"
] | null | null | null | torchrec/optim/tests/test_clipping.py | terrorizer1980/torchrec | 824efb76e4a1c8500e5ce976ac01e6bae894e03a | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import unittest
import torch
from torch.autograd import Variable
from torchrec.optim.clipping import GradientClippingOptimizer, GradientClipping
from torchrec.optim.tests.test_utils import DummyKeyedOptimizer
class TestGradientClippingOptimizer(unittest.TestCase):
def test_clip_all_gradients_norm(self) -> None:
# Clip all gradients to zero
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=0.0, clipping=GradientClipping.NORM
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([1.0, 2.0])
gradient_clipping_optimizer.step()
self.assertTrue(torch.equal(param_1.grad, torch.tensor([0.0, 0.0])))
def test_clip_no_gradients_norm(self) -> None:
# gradients are too small to be clipped
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=1.0, clipping=GradientClipping.NORM
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([0.5, 0.5])
gradient_clipping_optimizer.step()
self.assertTrue(torch.equal(param_1.grad, torch.tensor([0.5, 0.5])))
def test_clip_partial_gradients_norm(self) -> None:
# test partial clipping
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=1.0, clipping=GradientClipping.NORM
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([2.0, 4.0])
gradient_clipping_optimizer.step()
norm = 2.0 ** 2 + 4.0 ** 2
expected_grad = torch.tensor([2.0, 4.0]) * norm ** (-0.5)
self.assertTrue(torch.allclose(param_1.grad, expected_grad))
def test_clip_partial_gradients_norm_multi_params(self) -> None:
# test partial clipping
max_gradient = 2.0
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
param_2 = Variable(torch.tensor([2.0, 4.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1, "param_2": param_2},
{},
[{"params": [param_1]}, {"params": [param_2]}],
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer,
max_gradient=max_gradient,
clipping=GradientClipping.NORM,
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([2.0, 4.0])
param_2.grad = torch.tensor([4.0, 8.0])
gradient_clipping_optimizer.step()
print(param_1.grad, param_2.grad)
norm = (2.0 ** 2 + 4.0 ** 2 + 4.0 ** 2 + 8.0 ** 2) ** (-0.5)
expected_grad_1 = torch.tensor([2.0, 4.0]) * norm * max_gradient
expected_grad_2 = torch.tensor([4.0, 8.0]) * norm * max_gradient
print(param_1.grad, param_2.grad, expected_grad_1, expected_grad_2)
self.assertTrue(torch.allclose(param_1.grad, expected_grad_1))
self.assertTrue(torch.allclose(param_2.grad, expected_grad_2))
def test_clip_all_gradients_value(self) -> None:
# Clip all gradients to zero
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=0, clipping=GradientClipping.VALUE
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([1.0, 2.0])
gradient_clipping_optimizer.step()
self.assertTrue(torch.equal(param_1.grad, torch.tensor([0.0, 0.0])))
def test_clip_no_gradients_value(self) -> None:
# gradients are too small to be clipped
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=1.0, clipping=GradientClipping.VALUE
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([0.5, 0.5])
gradient_clipping_optimizer.step()
self.assertTrue(torch.equal(param_1.grad, torch.tensor([0.5, 0.5])))
def test_clip_gradients_value(self) -> None:
# test partial clipping
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1}, {}, [{"params": [param_1]}]
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer, max_gradient=1, clipping=GradientClipping.VALUE
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([2.0, 4.0])
gradient_clipping_optimizer.step()
expected_grad = torch.tensor([1.0, 1.0])
self.assertTrue(torch.allclose(param_1.grad, expected_grad))
def test_clip_partial_gradients_value_multi_params(self) -> None:
# test partial clipping
max_gradient = 2.0
param_1 = Variable(torch.tensor([1.0, 2.0]), requires_grad=True)
param_2 = Variable(torch.tensor([2.0, 4.0]), requires_grad=True)
keyed_optimizer = DummyKeyedOptimizer(
{"param_1": param_1, "param_2": param_2},
{},
[{"params": [param_1]}, {"params": [param_2]}],
)
gradient_clipping_optimizer = GradientClippingOptimizer(
optimizer=keyed_optimizer,
max_gradient=max_gradient,
clipping=GradientClipping.VALUE,
)
gradient_clipping_optimizer.zero_grad()
param_1.grad = torch.tensor([2.0, 4.0])
param_2.grad = torch.tensor([4.0, 8.0])
gradient_clipping_optimizer.step()
expected_grad_1 = torch.tensor([2.0, 2.0])
expected_grad_2 = torch.tensor([2.0, 2.0])
self.assertTrue(torch.allclose(param_1.grad, expected_grad_1))
self.assertTrue(torch.allclose(param_2.grad, expected_grad_2))
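# --- Illustrative sketch (added for exposition; not part of the original file) ---
# The expected values above follow the two standard clipping rules. For NORM:
# with total_norm = sqrt(sum_i ||g_i||^2), every gradient is scaled by
# max_gradient / total_norm whenever total_norm exceeds max_gradient. For
# VALUE: each element is clamped into [-max_gradient, max_gradient]. A minimal
# reference over plain tensors (hypothetical helpers, not torchrec's API):
def _clip_by_global_norm(grads, max_gradient):
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    if total_norm > max_gradient:
        grads = [g * (max_gradient / total_norm) for g in grads]
    return grads

def _clip_by_value(grads, max_gradient):
    return [g.clamp(-max_gradient, max_gradient) for g in grads]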
| 35.873096 | 88 | 0.648224 | 875 | 7,067 | 4.984 | 0.100571 | 0.068792 | 0.137583 | 0.041275 | 0.885118 | 0.862188 | 0.849576 | 0.812199 | 0.812199 | 0.812199 | 0 | 0.042518 | 0.231216 | 7,067 | 196 | 89 | 36.056122 | 0.760169 | 0.061837 | 0 | 0.640625 | 0 | 0 | 0.019649 | 0 | 0 | 0 | 0 | 0 | 0.078125 | 1 | 0.0625 | false | 0 | 0.039063 | 0 | 0.109375 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cb956af2b011ff57243c1dc4faef1854291c8763 | 3,586 | py | Python | tests/_test_hypothesis.py | Skelmis/Discord-Anti-Spam | 6b0db01cb6363f84d729c240ea4b9679d509222e | [
"MIT"
] | 17 | 2021-11-22T07:29:07.000Z | 2022-03-23T12:09:40.000Z | tests/_test_hypothesis.py | Skelmis/Discord-Anti-Spam | 6b0db01cb6363f84d729c240ea4b9679d509222e | [
"MIT"
] | 25 | 2021-11-17T20:19:22.000Z | 2022-03-30T09:05:35.000Z | tests/_test_hypothesis.py | Skelmis/Discord-Anti-Spam | 6b0db01cb6363f84d729c240ea4b9679d509222e | [
"MIT"
] | 2 | 2021-12-18T17:40:11.000Z | 2022-02-16T03:25:17.000Z | import pytest
from discord.ext import commands
from hypothesis import given
from hypothesis.strategies import datetimes, dictionaries, floats, lists, text
# noinspection PyUnresolvedReferences
from antispam import (
AntiSpamHandler,
GuildAddonNotFound,
GuildNotFound,
MemberAddonNotFound,
MemberNotFound,
Options,
PluginCache,
)
from antispam.dataclasses import Guild, Member # noqa
from .fixtures import MockClass, create_bot, create_handler, create_plugin_cache
"""A test file devoted to hypothesis.
These tests do not run on ci due to time
constraints however they are used for
better test argument coverage
"""
class TestHypoth:
@pytest.mark.asyncio
@given(arg=text())
async def test_set_member_data_text(self, arg):
"""Test the cache sets member addon's correct using text"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_member_data(1, 1)
await plugin_cache.set_member_data(1, 1, arg)
assert await plugin_cache.get_member_data(1, 1) == arg
@pytest.mark.asyncio
@given(arg=dictionaries(text(), floats()))
async def test_set_member_data_dictionaries(self, arg):
"""Test the cache sets member addon's correct using dictionaries"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_member_data(1, 1)
await plugin_cache.set_member_data(1, 1, arg)
assert await plugin_cache.get_member_data(1, 1) == arg
@pytest.mark.asyncio
@given(arg=lists(datetimes()))
async def test_set_member_data_datetimes(self, arg):
"""Test the cache sets member addon's correct using lists of datetimes"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_member_data(1, 1)
await plugin_cache.set_member_data(1, 1, arg)
assert await plugin_cache.get_member_data(1, 1) == arg
@pytest.mark.asyncio
@given(arg=text())
async def test_set_guild_data_text(self, arg):
"""Test the cache sets guild addon's correct using text"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_guild_data(1)
await plugin_cache.set_guild_data(1, arg)
assert await plugin_cache.get_guild_data(1) == arg
@pytest.mark.asyncio
@given(arg=dictionaries(text(), floats()))
async def test_set_guild_data_dictionaries(self, arg):
"""Test the cache sets guild addon's correct using dictionaries"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_guild_data(1)
await plugin_cache.set_guild_data(1, arg)
assert await plugin_cache.get_guild_data(1) == arg
@pytest.mark.asyncio
@given(arg=lists(datetimes()))
async def test_set_guild_data_datetimes(self, arg):
"""Test the cache sets guild addon's correct using lists of datetimes"""
plugin_cache = PluginCache(AntiSpamHandler(commands.Bot("!")), MockClass())
with pytest.raises(GuildNotFound):
await plugin_cache.get_guild_data(1)
await plugin_cache.set_guild_data(1, arg)
assert await plugin_cache.get_guild_data(1) == arg
| 34.152381 | 83 | 0.70106 | 457 | 3,586 | 5.308534 | 0.177243 | 0.113355 | 0.118714 | 0.093982 | 0.784831 | 0.784831 | 0.780297 | 0.780297 | 0.773702 | 0.773702 | 0 | 0.009414 | 0.200223 | 3,586 | 104 | 84 | 34.480769 | 0.836471 | 0.011154 | 0 | 0.71875 | 0 | 0 | 0.002001 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 1 | 0 | false | 0 | 0.109375 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cb9e07b037118bb3e5e15c46a26415b177b22dbc | 30 | py | Python | nbunicorn/__init__.py | nbunicorn/nbunicorn | abd4ac988efeec90997fae6880ddb2e7da804f4c | [
"MIT"
] | null | null | null | nbunicorn/__init__.py | nbunicorn/nbunicorn | abd4ac988efeec90997fae6880ddb2e7da804f4c | [
"MIT"
] | null | null | null | nbunicorn/__init__.py | nbunicorn/nbunicorn | abd4ac988efeec90997fae6880ddb2e7da804f4c | [
"MIT"
] | null | null | null | def expose(func):
return func | 15 | 17 | 0.766667 | 5 | 30 | 4.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 2 | 18 | 15 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
cbe33c895ea29de6bb5e7c1dbc374d8493f9d248 | 10,460 | py | Python | tests/cases/resources/tests/field.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/field.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/field.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | 1 | 2020-01-16T15:26:37.000Z | 2020-01-16T15:26:37.000Z | import json
from django.test.utils import override_settings
from avocado.models import DataField
from avocado.events.models import Log
from .base import BaseTestCase
from tests.models import Project, Title
class FieldResourceTestCase(BaseTestCase):
def test_get_all(self):
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
def test_get_all_orphan(self):
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=2).update(field_name="XXX")
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 4)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
def test_get_all_orphan_check_off(self):
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=2).update(field_name="XXX")
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 5)
def test_get_one(self):
# Not allowed to see
response = self.client.get('/api/fields/1/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 404)
response = self.client.get('/api/fields/2/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content))
self.assertTrue(Log.objects.filter(event='read', object_id=2).exists())
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
def test_get_one_orphan(self):
# Orphan the field before we retrieve it
DataField.objects.filter(pk=2).update(model_name="XXX")
response = self.client.get('/api/fields/2/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 500)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
def test_get_one_orphan_check_off(self):
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=2).update(field_name="XXX")
response = self.client.get('/api/fields/2/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
def test_get_privileged(self):
# Superuser sees everything
self.client.login(username='root', password='password')
response = self.client.get('/api/fields/?unpublished=1',
HTTP_ACCEPT='application/json')
self.assertEqual(len(json.loads(response.content)), 12)
response = self.client.get('/api/fields/1/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content))
def test_values(self):
# title.name
response = self.client.get('/api/fields/2/values/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content)['values'])
def test_values_no_limit(self):
# title.name
response = self.client.get('/api/fields/2/values/?limit=0',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
data = json.loads(response.content)
self.assertTrue(data['values'])
self.assertFalse('previous' in data['_links'])
self.assertFalse('next' in data['_links'])
def test_values_random(self):
# Random values
response = self.client.get('/api/fields/2/values/?random=3',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(len(json.loads(response.content)), 3)
def test_values_query(self):
# Query values
response = self.client.get('/api/fields/2/values/?query=a',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content)['values'], [
{'label': 'Analyst', 'value': 'Analyst'},
{'label': 'Guard', 'value': 'Guard'},
{'label': 'Lawyer', 'value': 'Lawyer'},
{'label': 'Programmer', 'value': 'Programmer'},
{'label': 'QA', 'value': 'QA'},
])
message = Log.objects.get(event='values', object_id=2)
self.assertEqual(message.data['query'], 'a')
def test_values_validate(self):
# Valid, single dict
response = self.client.post('/api/fields/2/values/',
data=json.dumps({'value': 'IT'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'IT',
'label': 'IT',
'valid': True,
})
message = Log.objects.get(event='validate', object_id=2)
self.assertEqual(message.data['count'], 1)
# Invalid
response = self.client.post('/api/fields/2/values/',
data=json.dumps({'value': 'Bartender'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'Bartender',
'label': 'Bartender',
'valid': False,
})
# Mixed, list
response = self.client.post('/api/fields/2/values/',
data=json.dumps([
{'value': 'IT'},
{'value': 'Bartender'},
{'value': 'Programmer'}
]),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
content = json.loads(response.content)
self.assertEqual(content, [
{'value': 'IT', 'label': 'IT', 'valid': True},
{'value': 'Bartender', 'label': 'Bartender', 'valid': False},
{'value': 'Programmer', 'label': 'Programmer', 'valid': True},
])
# Error - no value
response = self.client.post('/api/fields/2/values/',
data=json.dumps({}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 422)
# Error - type
response = self.client.post('/api/fields/2/values/',
data=json.dumps(None),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 422)
def test_labels_validate(self):
# Valid, single dict
response = self.client.post('/api/fields/2/values/',
data=json.dumps({'label': 'IT'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'IT',
'label': 'IT',
'valid': True,
})
def test_mixed_validate(self):
response = self.client.post('/api/fields/2/values/',
data=json.dumps([
{'label': 'IT'},
{'label': 'Bartender'},
{'value': 'Programmer'}
]),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
content = json.loads(response.content)
self.assertEqual(content, [
{'value': 'IT', 'label': 'IT', 'valid': True},
{'value': 'Bartender', 'label': 'Bartender', 'valid': False},
{'value': 'Programmer', 'label': 'Programmer', 'valid': True},
])
def test_stats(self):
# title.name
response = self.client.get('/api/fields/2/stats/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content))
self.assertTrue(Log.objects.filter(event='stats', object_id=2).exists())
# title.salary
response = self.client.get('/api/fields/3/stats/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content))
self.assertTrue(Log.objects.filter(event='stats', object_id=3).exists())
# project.due_date
response = self.client.get('/api/fields/11/stats/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
stats = json.loads(response.content)
self.assertTrue(stats)
self.assertTrue(Log.objects.filter(event='stats', object_id=11).exists())
self.assertEqual(stats['min'], '2000-01-01')
self.assertEqual(stats['max'], '2010-01-01')
def test_empty_stats(self):
Title.objects.all().delete()
response = self.client.get('/api/fields/2/stats/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertTrue(json.loads(response.content))
self.assertTrue(Log.objects.filter(event='stats', object_id=2).exists())
def test_dist(self):
# title.salary
response = self.client.get('/api/fields/3/dist/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content), {
u'size': 4,
u'clustered': False,
u'outliers': [],
u'data': [{
u'count': 3,
u'values': [15000]
}, {
u'count': 1,
u'values': [10000]
}, {
u'count': 1,
u'values': [20000]
}, {
u'count': 1,
u'values': [200000]
}],
})
self.assertTrue(Log.objects.filter(event='dist', object_id=3).exists())
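# --- Illustrative sketch (added for exposition; not part of the original suite) ---
# The 'dist' payload asserted above is a value histogram: each entry pairs a
# count with the value(s) it covers, and 'clustered' is False because no
# binning was applied. The expected counts can be reproduced with the
# standard library (hypothetical helper, not part of serrano):
from collections import Counter

def _histogram(values):
    return [{'count': n, 'values': [v]} for v, n in Counter(values).items()]
# e.g. _histogram([15000, 15000, 15000, 10000, 20000, 200000]) yields the
# four count/values pairs checked in test_dist above.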
| 39.029851 | 81 | 0.597992 | 1,158 | 10,460 | 5.291019 | 0.126943 | 0.097927 | 0.073445 | 0.102008 | 0.815734 | 0.794843 | 0.756488 | 0.733964 | 0.733964 | 0.698384 | 0 | 0.020466 | 0.257266 | 10,460 | 267 | 82 | 39.17603 | 0.768181 | 0.040535 | 0 | 0.598131 | 0 | 0 | 0.16665 | 0.030255 | 0 | 0 | 0 | 0 | 0.261682 | 1 | 0.079439 | false | 0.004673 | 0.028037 | 0 | 0.11215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cbe59159b6c88063e6a3e166c1a8be81cf0cb012 | 66 | py | Python | neuromodels/solvers/__init__.py | nicolossus/neuromodels | 82f95a8670116ef26b71c02f9c94626c502bc989 | [
"MIT"
] | null | null | null | neuromodels/solvers/__init__.py | nicolossus/neuromodels | 82f95a8670116ef26b71c02f9c94626c502bc989 | [
"MIT"
] | null | null | null | neuromodels/solvers/__init__.py | nicolossus/neuromodels | 82f95a8670116ef26b71c02f9c94626c502bc989 | [
"MIT"
] | null | null | null | from .brunel_solver import *
from .hodgkin_huxley_solver import *
| 22 | 36 | 0.818182 | 9 | 66 | 5.666667 | 0.666667 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 2 | 37 | 33 | 0.87931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
380cd10be942b329ced897627a8ce172da476c4c | 46 | py | Python | app/webapp.py | edgestats/edgestats-webui | fcb7e3ef5e347df50530a28fa128e947999c9d52 | [
"MIT"
] | 1 | 2021-12-10T20:03:29.000Z | 2021-12-10T20:03:29.000Z | app/webapp.py | edgestats/edgestats-webui | fcb7e3ef5e347df50530a28fa128e947999c9d52 | [
"MIT"
] | null | null | null | app/webapp.py | edgestats/edgestats-webui | fcb7e3ef5e347df50530a28fa128e947999c9d52 | [
"MIT"
] | null | null | null | # Entry point for the webapp
from . import app | 23 | 28 | 0.76087 | 8 | 46 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195652 | 46 | 2 | 29 | 23 | 0.945946 | 0.565217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
69e6105230b3a94af38b7e82cee71c8336915fd3 | 86 | py | Python | s_drv_textio.py | EnergitCZ/NelecticAL | 477dacd6d4b8416e9e0b069fe7efcb65f54f2499 | [
"MIT"
] | null | null | null | s_drv_textio.py | EnergitCZ/NelecticAL | 477dacd6d4b8416e9e0b069fe7efcb65f54f2499 | [
"MIT"
] | null | null | null | s_drv_textio.py | EnergitCZ/NelecticAL | 477dacd6d4b8416e9e0b069fe7efcb65f54f2499 | [
"MIT"
] | null | null | null | def printl(text): #Printing
print(text)
def getl(text): #Getting
return input(text) | 17.2 | 27 | 0.732558 | 13 | 86 | 4.846154 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127907 | 86 | 5 | 28 | 17.2 | 0.84 | 0.174419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.25 | 0.75 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 6 |
69eccc6521698ee4bbaeea785f8abccb44562ee5 | 6,714 | py | Python | tests/test_cursor_fetchmany.py | adh/ctds | 8c8b562341fb9635e3d89013ff06ffc6b1397abb | [
"MIT"
] | 78 | 2016-03-14T18:02:05.000Z | 2021-11-26T23:23:06.000Z | tests/test_cursor_fetchmany.py | adh/ctds | 8c8b562341fb9635e3d89013ff06ffc6b1397abb | [
"MIT"
] | 64 | 2016-10-18T17:54:08.000Z | 2021-09-30T11:01:02.000Z | tests/test_cursor_fetchmany.py | adh/ctds | 8c8b562341fb9635e3d89013ff06ffc6b1397abb | [
"MIT"
] | 17 | 2016-07-21T20:22:12.000Z | 2020-11-07T01:25:26.000Z | import ctds
from .base import TestExternalDatabase
class TestCursorFetchMany(TestExternalDatabase):
'''Unit tests related to the Cursor.fetchmany() method.
'''
def test___doc__(self):
self.assertEqual(
ctds.Cursor.fetchmany.__doc__,
'''\
fetchmany(size=self.arraysize)
Fetch the next set of rows of a query result, returning a sequence of
sequences. An empty sequence is returned when no more rows are available.
:pep:`0249#fetchmany`
:return: A sequence of result rows.
:rtype: ctds.RowList
'''
)
def test_closed(self):
with self.connect() as connection:
cursor = connection.cursor()
cursor.close()
try:
cursor.fetchmany()
except ctds.InterfaceError as ex:
self.assertEqual(str(ex), 'cursor closed')
else:
self.fail('.fetchmany() did not fail as expected') # pragma: nocover
def test_closed_connection(self): # pylint: disable=invalid-name
connection = self.connect()
with connection.cursor() as cursor:
connection.close()
try:
cursor.fetchmany()
except ctds.InterfaceError as ex:
self.assertEqual(str(ex), 'connection closed')
else:
self.fail('.fetchmany() did not fail as expected') # pragma: nocover
def test_invalid_size(self):
with self.connect() as connection:
with connection.cursor() as cursor:
self.assertRaises(TypeError, cursor.fetchmany, size='123')
def test_premature(self):
with self.connect() as connection:
with connection.cursor() as cursor:
self.assertRaises(ctds.InterfaceError, cursor.fetchmany)
def test_fetchmany(self):
with self.connect() as connection:
with connection.cursor() as cursor:
cursor.execute(
'''
DECLARE @{0} TABLE(i INT);
INSERT INTO @{0}(i) VALUES (1),(2),(3);
SELECT * FROM @{0};
SELECT i * 2 FROM @{0};
'''.format(self.test_fetchmany.__name__)
)
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(1,)])
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(2,)])
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(3,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(2,)])
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(4,)])
self.assertEqual([tuple(row) for row in cursor.fetchmany()], [(6,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), None)
self.assertRaises(ctds.InterfaceError, cursor.fetchmany)
cursor.arraysize = 3
cursor.execute(
'''
DECLARE @{0} TABLE(i INT);
INSERT INTO @{0}(i) VALUES (1),(2),(3);
SELECT * FROM @{0};
SELECT i * 2 FROM @{0};
'''.format(self.test_fetchmany.__name__)
)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(1,), (2,), (3,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(2,), (4,), (6,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), None)
self.assertRaises(ctds.InterfaceError, cursor.fetchmany)
def test_size(self):
with self.connect() as connection:
with connection.cursor() as cursor:
cursor.execute(
'''
DECLARE @{0} TABLE(i INT);
INSERT INTO @{0}(i) VALUES (1),(2),(3);
SELECT * FROM @{0};
SELECT i * 2 FROM @{0};
'''.format(self.test_size.__name__)
)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(1,), (2,), (3,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(2,), (4,), (6,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), None)
self.assertRaises(ctds.InterfaceError, cursor.fetchmany)
def test_empty_resultset(self):
with self.connect() as connection:
with connection.cursor() as cursor:
cursor.execute(
'''
DECLARE @{0} TABLE(i INT);
INSERT INTO @{0}(i) VALUES (1),(2),(3);
SELECT i FROM @{0} WHERE i < 0;
'''.format(self.test_empty_resultset.__name__)
)
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), None)
def test_multiple_resultsets(self):
with self.connect() as connection:
with connection.cursor() as cursor:
cursor.execute(
'''
DECLARE @{0} TABLE(i INT);
INSERT INTO @{0}(i) VALUES (1),(2),(3);
SELECT i FROM @{0} WHERE i < 0;
SELECT i AS j FROM @{0} WHERE i > 2;
SELECT i AS k FROM @{0} WHERE i > 3;
SELECT i AS ii FROM @{0};
'''.format(self.test_multiple_resultsets.__name__)
)
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(3,)])
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual(list(cursor.fetchmany()), [])
self.assertEqual(cursor.nextset(), True)
self.assertEqual([tuple(row) for row in cursor.fetchmany(3)], [(1,), (2,), (3,)])
self.assertEqual(cursor.nextset(), None)
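# --- Illustrative sketch (added for exposition; not part of the original suite) ---
# The tests above exercise the standard DB-API 2.0 fetchmany() contract:
# repeated calls drain a result set in self.arraysize-sized chunks (or an
# explicit size), and an empty sequence signals exhaustion. The canonical
# consumption loop looks like this (generic DB-API usage, not ctds-specific):
def _drain_result_set(cursor, size=None):
    rows = []
    while True:
        chunk = cursor.fetchmany(size) if size else cursor.fetchmany()
        if not chunk:
            return rows
        rows.extend(chunk)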
| 44.76 | 97 | 0.513852 | 678 | 6,714 | 5.017699 | 0.153392 | 0.15873 | 0.070547 | 0.081129 | 0.783069 | 0.764256 | 0.755144 | 0.755144 | 0.743386 | 0.729865 | 0 | 0.018266 | 0.355824 | 6,714 | 149 | 98 | 45.060403 | 0.768324 | 0.017724 | 0 | 0.636364 | 0 | 0 | 0.02118 | 0 | 0 | 0 | 0 | 0 | 0.414141 | 1 | 0.090909 | false | 0 | 0.020202 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
69f39949e310725092ca308cf321373d2ebb7fe4 | 45 | py | Python | multitasking_transformers/dataloaders/__init__.py | NLPatVCU/multitasking_transformers | 3245518a6cb3748916214233ce77965384df72f9 | [
"MIT"
] | 19 | 2020-09-22T08:26:23.000Z | 2022-03-29T03:06:56.000Z | multitasking_transformers/dataloaders/__init__.py | NLPatVCU/multitasking_transformers | 3245518a6cb3748916214233ce77965384df72f9 | [
"MIT"
] | 2 | 2020-06-08T21:27:31.000Z | 2020-06-19T18:00:19.000Z | multitasking_transformers/dataloaders/__init__.py | AndriyMulyar/multitasking_transformers | 3245518a6cb3748916214233ce77965384df72f9 | [
"MIT"
] | 5 | 2020-04-03T23:33:01.000Z | 2020-07-02T05:42:46.000Z | from .round_robin import RoundRobinDataLoader | 45 | 45 | 0.911111 | 5 | 45 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 45 | 1 | 45 | 45 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38541941fdb6d7c2864cb624f6df2afefc95fdcd | 27 | py | Python | itatools/__init__.py | quantop-dungeon/Itaw | ea2c5250fda2ab000a8081af32f7d947c345210a | [
"MIT"
] | null | null | null | itatools/__init__.py | quantop-dungeon/Itaw | ea2c5250fda2ab000a8081af32f7d947c345210a | [
"MIT"
] | null | null | null | itatools/__init__.py | quantop-dungeon/Itaw | ea2c5250fda2ab000a8081af32f7d947c345210a | [
"MIT"
] | null | null | null | from itatools.itaw import * | 27 | 27 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38839fd34878b0b036838f77e63da750218dc45d | 4,434 | py | Python | modules/augment/misc/python/test/test_augmenter_images.py | FadyEssam/opencv_contrib | 8ebe2629ec7ae17338f6dc7acceada82151185ed | [
"BSD-3-Clause"
] | null | null | null | modules/augment/misc/python/test/test_augmenter_images.py | FadyEssam/opencv_contrib | 8ebe2629ec7ae17338f6dc7acceada82151185ed | [
"BSD-3-Clause"
] | 1 | 2019-07-11T20:21:36.000Z | 2019-07-11T20:21:36.000Z | modules/augment/misc/python/test/test_augmenter_images.py | FadyEssam/opencv_contrib | 8ebe2629ec7ae17338f6dc7acceada82151185ed | [
"BSD-3-Clause"
] | null | null | null | import cv2 as cv
import numpy as np
from tests_common import NewOpenCVTests
from config import MIN_NUMBER_OF_TESTS, MAX_NUMBER_OF_TESTS, MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE, MIN_NUMBER_OF_GROUND_TRUTH_DATA, MAX_NUMBER_OF_GROUND_TRUTH_DATA
## for consistency
np.random.seed(seed=1)
cv.setRNGSeed(seed=1)
class augmenter_test(NewOpenCVTests):
def test_augmenter_images(self):
numberOfImages = np.random.randint(MIN_NUMBER_OF_TESTS, MAX_NUMBER_OF_TESTS)
aug = cv.augment_Augmenter()
aug.add(cv.augment_FlipHorizontal(), prob=0.7)
aug.add(cv.augment_FlipVertical(), prob=0.5)
aug.add(cv.augment_GaussianBlur(kernelSize=5, sigma=12), prob=0.7)
aug.add(cv.augment_Rotate(minAngle=0, maxAngle=180), prob=0.3)
aug.add(cv.augment_Resize(size=(1200, 900)), prob=0.4)
imgs = []
for i in range(numberOfImages):
widthOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
heightOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
img = np.random.rand(heightOfImages, widthOfImages)
imgs.append(img)
imgs = imgs
aug.applyImages(imgs)
def test_augmenter_images_with_masks(self):
numberOfImages = np.random.randint(MIN_NUMBER_OF_TESTS, MAX_NUMBER_OF_TESTS)
aug = cv.augment_Augmenter()
aug.add(cv.augment_FlipHorizontal(), prob=0.7)
aug.add(cv.augment_FlipVertical(), prob=0.5)
aug.add(cv.augment_GaussianBlur(kernelSize=5, sigma=12), prob=0.7)
aug.add(cv.augment_Rotate(minAngle=0, maxAngle=180), prob=0.3)
aug.add(cv.augment_Resize(size=(1200, 900)), prob=0.4)
imgs = []
masks = []
for i in range(numberOfImages):
widthOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
heightOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
img = np.random.rand(heightOfImages, widthOfImages)
imgs.append(img)
mask = np.random.rand(heightOfImages, widthOfImages)
masks.append(mask)
aug.applyImagesWithMasks(imgs, masks)
def test_augmenter_images_with_points(self):
numberOfImages = np.random.randint(MIN_NUMBER_OF_TESTS, MAX_NUMBER_OF_TESTS)
aug = cv.augment_Augmenter()
aug.add(cv.augment_FlipHorizontal(), prob=0.7)
aug.add(cv.augment_FlipVertical(), prob=0.5)
aug.add(cv.augment_GaussianBlur(kernelSize=5, sigma=12), prob=0.7)
aug.add(cv.augment_Rotate(minAngle=0, maxAngle=180), prob=0.3)
aug.add(cv.augment_Resize(size=(1200, 900)), prob=0.4)
imgs = []
pointsArr = []
for i in range(numberOfImages):
widthOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
heightOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
numberOfPoints = np.random.randint(MIN_NUMBER_OF_GROUND_TRUTH_DATA, MAX_NUMBER_OF_GROUND_TRUTH_DATA)
img = np.random.rand(heightOfImages, widthOfImages)
imgs.append(img)
points = np.random.rand(numberOfPoints, 2)
pointsArr.append(points)
aug.applyImagesWithPoints(imgs, pointsArr)
def test_augmenter_images_with_rectangles(self):
numberOfImages = np.random.randint(MIN_NUMBER_OF_TESTS, MAX_NUMBER_OF_TESTS)
aug = cv.augment_Augmenter()
aug.add(cv.augment_FlipHorizontal(), prob=0.7)
aug.add(cv.augment_FlipVertical(), prob=0.5)
aug.add(cv.augment_GaussianBlur(kernelSize=5, sigma=12), prob=0.7)
aug.add(cv.augment_Rotate(minAngle=0, maxAngle=180), prob=0.3)
aug.add(cv.augment_Resize(size=(1200, 900)), prob=0.4)
imgs = []
rectsArr = []
for i in range(numberOfImages):
widthOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
heightOfImages = np.random.randint(MIN_IMAGE_DIM_SIZE, MAX_IMAGE_DIM_SIZE)
numberOfRects = np.random.randint(MIN_NUMBER_OF_GROUND_TRUTH_DATA, MAX_NUMBER_OF_GROUND_TRUTH_DATA)
img = np.random.rand(heightOfImages, widthOfImages)
imgs.append(img)
rects = np.random.rand(numberOfRects, 4)
rectsArr.append(rects)
aug.applyImagesWithRectangles(imgs, rectsArr)
if __name__ == '__main__':
NewOpenCVTests.bootstrap() | 40.309091 | 165 | 0.684032 | 593 | 4,434 | 4.841484 | 0.143339 | 0.075235 | 0.05573 | 0.104493 | 0.811216 | 0.770463 | 0.770463 | 0.770463 | 0.748868 | 0.748868 | 0 | 0.028824 | 0.209743 | 4,434 | 110 | 166 | 40.309091 | 0.790525 | 0.003383 | 0 | 0.641975 | 0 | 0 | 0.001811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049383 | false | 0 | 0.049383 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
38896645bafa3ce3e7c8278772328dc3499ef23d | 225 | py | Python | mopidy_alarm/native/time_printer.py | valentinb/mopidy-alarm | ef268ac0f6fc811fa72f7f69961074e45a299952 | [
"Apache-2.0"
] | 3 | 2015-05-22T00:01:08.000Z | 2018-03-15T07:26:13.000Z | mopidy_alarm/native/time_printer.py | valentinb/mopidy-alarm | ef268ac0f6fc811fa72f7f69961074e45a299952 | [
"Apache-2.0"
] | null | null | null | mopidy_alarm/native/time_printer.py | valentinb/mopidy-alarm | ef268ac0f6fc811fa72f7f69961074e45a299952 | [
"Apache-2.0"
] | null | null | null | from mopidy_alarm import time_printer_interface
import os
class TimePrinter(time_printer_interface.TimePrinterInterface):
def print_time(self, hours, minutes):
os.system("clear")
print(str(hours) + ':' + str(minutes))
| 25 | 63 | 0.777778 | 29 | 225 | 5.827586 | 0.655172 | 0.130178 | 0.236686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 225 | 8 | 64 | 28.125 | 0.845 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
38d8389cb3ac72a7cf1d8564dab79763a31150fc | 155 | py | Python | rcds/util/__init__.py | jordanbertasso/rcds | d3d655a59a350042d65476793db84e761de04829 | [
"BSD-3-Clause"
] | 5 | 2020-07-13T12:40:02.000Z | 2021-08-21T11:18:28.000Z | rcds/util/__init__.py | jordanbertasso/rcds | d3d655a59a350042d65476793db84e761de04829 | [
"BSD-3-Clause"
] | 144 | 2020-07-06T11:26:49.000Z | 2022-02-01T14:33:28.000Z | rcds/util/__init__.py | jordanbertasso/rcds | d3d655a59a350042d65476793db84e761de04829 | [
"BSD-3-Clause"
] | 7 | 2020-07-22T12:38:32.000Z | 2021-12-21T14:27:54.000Z | from .deep_merge import deep_merge # noqa: F401
from .find import find_files # noqa: F401
from .load import SUPPORTED_EXTENSIONS, load_any # noqa: F401
| 38.75 | 62 | 0.774194 | 24 | 155 | 4.791667 | 0.5 | 0.208696 | 0.208696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069231 | 0.16129 | 155 | 3 | 63 | 51.666667 | 0.815385 | 0.206452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2a4af30837ce48aeb7cf235e57297290fdc2e3e2 | 61 | py | Python | discord_build_info_py/handler/parse.py | Saulouk/discord | 067c74ed5e5774bcaf05b13dd8fd67b6804eabfe | [
"Apache-2.0"
] | 5 | 2020-09-25T01:01:08.000Z | 2021-12-19T19:05:53.000Z | discord_build_info_py/handler/parse.py | Saulouk/discord | 067c74ed5e5774bcaf05b13dd8fd67b6804eabfe | [
"Apache-2.0"
] | null | null | null | discord_build_info_py/handler/parse.py | Saulouk/discord | 067c74ed5e5774bcaf05b13dd8fd67b6804eabfe | [
"Apache-2.0"
] | 2 | 2022-02-09T07:44:19.000Z | 2022-02-28T08:26:01.000Z | import json
def reparse(data):
return json.loads(data)
| 10.166667 | 27 | 0.704918 | 9 | 61 | 4.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 5 | 28 | 12.2 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
aaa25dfe39fe55e7ef055c8e64679a60def25c30 | 188 | py | Python | fletcher/__init__.py | jbrockmendel/fletcher | 99b8f12beefed4991960f316d75199de32c30b2a | [
"MIT"
] | null | null | null | fletcher/__init__.py | jbrockmendel/fletcher | 99b8f12beefed4991960f316d75199de32c30b2a | [
"MIT"
] | 51 | 2019-10-16T12:48:11.000Z | 2020-08-26T10:37:50.000Z | fletcher/__init__.py | krivonogov/fletcher | 00e6f233b3503b534afbb7767dd7667f4379794d | [
"MIT"
] | null | null | null | from .base import FletcherArray, FletcherDtype, pandas_from_arrow
from .string_array import TextAccessor
__all__ = ["FletcherArray", "FletcherDtype", "TextAccessor", "pandas_from_arrow"]
| 37.6 | 81 | 0.81383 | 20 | 188 | 7.2 | 0.55 | 0.361111 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090426 | 188 | 4 | 82 | 47 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.292553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aacf6f0055f89c4db253def8f2522a6a1d6ba89b | 9,439 | py | Python | data_structures_two/1_binary_heap.py | amanalok/python-dsa | 4b49032c3fd7c8236f1154a3d080fd8e1713d74f | [
"MIT"
] | null | null | null | data_structures_two/1_binary_heap.py | amanalok/python-dsa | 4b49032c3fd7c8236f1154a3d080fd8e1713d74f | [
"MIT"
] | null | null | null | data_structures_two/1_binary_heap.py | amanalok/python-dsa | 4b49032c3fd7c8236f1154a3d080fd8e1713d74f | [
"MIT"
] | null | null | null | import sys
class MinHeap:
def __init__(self, capacity):
self.storage = [None] * capacity
self.capacity = capacity
self.size = 0
def get_parent_index(self, index):
return (index - 1) // 2
def get_left_child_index(self, index):
return (index * 2) + 1
def get_right_child_index(self, index):
return (index * 2) + 2
def has_parent(self, index):
return self.get_parent_index(index) >= 0
def has_left_child(self, index):
return self.get_left_child_index(index) < self.size
def has_right_child(self, index):
return self.get_right_child_index(index) < self.size
def get_parent(self, index):
parent_index = self.get_parent_index(index)
return self.storage[parent_index]
def get_left_child(self, index):
left_child_index = self.get_left_child_index(index)
return self.storage[left_child_index]
def get_right_child(self, index):
right_child_index = self.get_right_child_index(index)
return self.storage[right_child_index]
def swap(self, index1, index2):
temp = self.storage[index1]
self.storage[index1] = self.storage[index2]
self.storage[index2] = temp
def is_full(self):
return self.capacity == self.size
def is_empty(self):
return self.size == 0
def add(self, data):
if self.is_full():
raise Exception('Min Binary Heap is full !!!')
self.storage[self.size] = data
self.size += 1
self.heapify_up()
def heapify_up(self):
index = self.size - 1
while(self.has_parent(index) and
self.get_parent(index) > self.storage[index]):
parent_index = self.get_parent_index(index)
self.swap(parent_index, index)
index = parent_index
def delete(self):
if self.is_empty():
raise Exception('Min Binary Heap is empty !!!')
temp = self.storage[0]
self.storage[0] = self.storage[self.size-1]
self.size -= 1
self.heapify_down()
return temp
def heapify_down(self):
index = 0
while self.has_left_child(index):
smaller_child_index = self.get_left_child_index(index)
if (self.has_right_child(index) and
self.get_right_child(index) < self.storage[smaller_child_index]):
smaller_child_index = self.get_right_child_index(index)
if self.storage[index] < self.storage[smaller_child_index]:
break
self.swap(index, smaller_child_index)
index = smaller_child_index
def recursive_add(self, data):
if self.is_full():
raise Exception('Min Binary Heap is full !!!')
index = self.size
self.storage[index] = data
self.size += 1
self.recursive_heapify_up(index)
def recursive_heapify_up(self, index):
if (self.has_parent(index) and
self.get_parent(index) > self.storage[index]):
parent_index = self.get_parent_index(index)
self.swap(index, parent_index)
self.recursive_heapify_up(parent_index)
def recursive_delete(self):
if self.is_empty():
raise Exception('Min Binary Heap is empty !!!')
index = 0
temp = self.storage[index]
self.storage[index] = self.storage[self.size-1]
self.size -= 1
self.recursive_heapify_down(index)
return temp
def recursive_heapify_down(self, index):
min_value_index = index
if (self.has_left_child(index)
and self.get_left_child(index) < self.storage[index]):
min_value_index = self.get_left_child_index(index)
if (self.has_right_child(index)
and self.get_right_child(index) < self.storage[min_value_index]):
min_value_index = self.get_right_child_index(index)
if index != min_value_index:
self.swap(index, min_value_index)
self.recursive_heapify_down(min_value_index)
def display(self):
# display() added so min_val_pq_main() below can run; mirrors MaxHeap.display.
print('Following are the elements in the Min Heap: ', end='')
print(self.storage[:self.size])
class MaxHeap:
def __init__(self, capacity):
self.storage = [None] * capacity
self.capacity = capacity
self.size = 0
def get_parent_index(self, index):
return (index - 1) // 2
def get_left_child_index(self, index):
return (index * 2) + 1
def get_right_child_index(self, index):
return (index * 2) + 2
def has_parent(self, index):
return self.get_parent_index(index) >= 0
def has_left_child(self, index):
return self.get_left_child_index(index) < self.size
def has_right_child(self, index):
return self.get_right_child_index(index) < self.size
def get_parent(self, index):
parent_index = self.get_parent_index(index)
return self.storage[parent_index]
def get_left_child(self, index):
left_child_index = self.get_left_child_index(index)
return self.storage[left_child_index]
def get_right_child(self, index):
right_child_index = self.get_right_child_index(index)
return self.storage[right_child_index]
def swap(self, index1, index2):
temp = self.storage[index1]
self.storage[index1] = self.storage[index2]
self.storage[index2] = temp
def is_full(self):
return self.size == self.capacity
def is_empty(self):
return self.size == 0
def add(self, data):
if self.is_full():
raise Exception('Max Binary Heap is full !!!')
self.storage[self.size] = data
self.size += 1
self.heapify_up()
def heapify_up(self):
index = self.size - 1
while (self.has_parent(index) and
self.get_parent(index) < self.storage[index]):
parent_index = self.get_parent_index(index)
self.swap(index, parent_index)
index = parent_index
def delete(self):
if self.is_empty():
raise Exception('Max Heap is empty !!!')
data = self.storage[0]
self.storage[0] = self.storage[self.size - 1]
self.size -= 1
self.heapify_down()
return data
def heapify_down(self):
index = 0
while self.has_left_child(index):
larger_child_index = self.get_left_child_index(index)
if(self.has_right_child(index) and
self.get_right_child(index) > self.get_left_child(index)):
larger_child_index = self.get_right_child_index(index)
if self.storage[index] > self.storage[larger_child_index]:
break
self.swap(index, larger_child_index)
index = larger_child_index
def recursive_add(self, data):
if self.is_full():
raise Exception('Max Binary Heap is full !!!')
self.storage[self.size] = data
self.size += 1
self.recursive_heapify_up(self.size - 1)
def recursive_heapify_up(self, index):
if (self.has_parent(index) and
self.get_parent(index) < self.storage[index]):
parent_index = self.get_parent_index(index)
self.swap(index, parent_index)
self.recursive_heapify_up(parent_index)
def recursive_delete(self):
if self.is_empty():
raise Exception('Max Binary Heap is empty !!!')
data = self.storage[0]
self.storage[0] = self.storage[self.size - 1]
self.size -= 1
self.recursive_heapify_down(0)
return data
def recursive_heapify_down(self, index):
larger_val_index = index
if (self.has_left_child(index) and
self.get_left_child(index) > self.storage[index]):
larger_val_index = self.get_left_child_index(index)
if (self.has_right_child(index) and
self.get_right_child(index) > self.storage[larger_val_index]):
larger_val_index = self.get_right_child_index(index)
if index != larger_val_index:
self.swap(index, larger_val_index)
self.recursive_heapify_down(larger_val_index)
def display(self):
print('Following are the elements in the Max Heap: ', end='')
print(self.storage[:self.size])
def max_val_pq_main():
max_pq = MaxHeap(10)
max_pq.add(20)
max_pq.add(11)
max_pq.add(29)
max_pq.add(17)
max_pq.add(81)
max_pq.display()
print('Dequeued elements sequence: ')
while(max_pq.size != 0):
print(max_pq.delete(), end=' ')
print('\n')
max_pq.recursive_add(12)
max_pq.recursive_add(1)
max_pq.recursive_add(14)
max_pq.recursive_add(7)
max_pq.display()
print('Dequeued (Recursive) elements sequence: ')
while(max_pq.size != 0):
print(max_pq.recursive_delete(), end= ' ')
print()
def min_val_pq_main():
min_pq = MinHeap(10)
min_pq.add(20)
min_pq.add(11)
min_pq.add(29)
min_pq.add(17)
min_pq.add(81)
min_pq.display()
print('Dequeued elements sequence: ')
while(min_pq.size != 0):
print(min_pq.delete(), end=' ')
print('\n')
min_pq.recursive_add(12)
min_pq.recursive_add(1)
min_pq.recursive_add(14)
min_pq.recursive_add(7)
min_pq.display()
print('Dequeued (Recursive) elements sequence: ')
while(min_pq.size != 0):
print(min_pq.recursive_delete(), end=' ')
print()
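# --- Illustrative check (added for exposition; not part of the original file) ---
# For an array-backed binary heap, indices follow the level-order layout:
# parent(i) = (i - 1) // 2 and the children sit at 2*i + 1 and 2*i + 2. A
# quick cross-check of MinHeap against the standard library's heapq:
import heapq

def _check_min_heap_against_heapq(values):
    h = MinHeap(len(values))
    buf = list(values)
    heapq.heapify(buf)
    for v in values:
        h.add(v)
    ours = [h.delete() for _ in values]
    reference = [heapq.heappop(buf) for _ in values]
    return ours == reference  # both drain in ascending order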
if __name__ == '__main__':
max_val_pq_main()
| 28.008902 | 81 | 0.619557 | 1,259 | 9,439 | 4.378078 | 0.059571 | 0.099782 | 0.055878 | 0.040094 | 0.839441 | 0.793723 | 0.767598 | 0.759615 | 0.736756 | 0.721517 | 0 | 0.013877 | 0.274711 | 9,439 | 336 | 82 | 28.092262 | 0.791265 | 0 | 0 | 0.639344 | 0 | 0 | 0.043331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184426 | false | 0 | 0.004098 | 0.065574 | 0.303279 | 0.057377 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aae23d15e329517d633883888fff6e9816ce2b15 | 15 | py | Python | Problem109.py | Cleancode404/ProjectEuler | 2f93b256b107bfb6a395b8aa197cfeacc599b00b | [
"MIT"
] | null | null | null | Problem109.py | Cleancode404/ProjectEuler | 2f93b256b107bfb6a395b8aa197cfeacc599b00b | [
"MIT"
] | null | null | null | Problem109.py | Cleancode404/ProjectEuler | 2f93b256b107bfb6a395b8aa197cfeacc599b00b | [
"MIT"
] | null | null | null | """
Darts
"""
| 3 | 5 | 0.333333 | 1 | 15 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 15 | 4 | 6 | 3.75 | 0.454545 | 0.333333 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a9e4e25ec3d3bee70b92faf74f981e1e7ea3d51 | 31,387 | py | Python | training/loss_vc2.py | zhuxinqimac/stylegan2 | 5c3bda161ead21ea290de4190d3704e59cf6de64 | [
"BSD-Source-Code"
] | 5 | 2020-01-23T10:04:27.000Z | 2021-07-04T09:51:28.000Z | training/loss_vc2.py | zhuxinqimac/stylegan2 | 5c3bda161ead21ea290de4190d3704e59cf6de64 | [
"BSD-Source-Code"
] | null | null | null | training/loss_vc2.py | zhuxinqimac/stylegan2 | 5c3bda161ead21ea290de4190d3704e59cf6de64 | [
"BSD-Source-Code"
] | null | null | null | #!/usr/bin/python
#-*- coding: utf-8 -*-
# >.>.>.>.>.>.>.>.>.>.>.>.>.>.>.>.
# Licensed under the Apache License, Version 2.0 (the "License")
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# --- File Name: loss_vc2.py
# --- Creation Date: 24-04-2020
# --- Last Modified: Fri 09 Apr 2021 17:05:54 AEST
# --- Author: Xinqi Zhu
# .<.<.<.<.<.<.<.<.<.<.<.<.<.<.<.<
"""
Loss function in VC2.
"""
import numpy as np
import tensorflow as tf
import dnnlib.tflib as tflib
from dnnlib.tflib.autosummary import autosummary
from training.utils import get_return_v
def G_logistic_ns(G, D, opt, training_set, minibatch_size, DM=None, latent_type='uniform'):
_ = opt
# latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size, G.input_shapes[0][1]], minval=-2, maxval=2)
elif latent_type == 'normal':
latents = tf.random.normal([minibatch_size, G.input_shapes[0][1]])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size, G.input_shapes[0][1]])
else:
raise ValueError('Latent type not supported: ' + latent_type)
labels = training_set.get_random_labels_tf(minibatch_size)
fake_images_out = get_return_v(G.get_output_for(latents, labels, is_training=True, return_atts=False), 1)
fake_scores_out = get_return_v(D.get_output_for(fake_images_out, labels, is_training=True), 1)
loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
return loss, None
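# --- Illustrative check (added for exposition; not part of the original file) ---
# The non-saturating loss above relies on the identity
# softplus(-x) = log(1 + e^{-x}) = -log(sigmoid(x)). A quick NumPy check:
def _check_ns_identity(x):
    softplus = np.log1p(np.exp(-x))
    neg_log_sigmoid = -np.log(1.0 / (1.0 + np.exp(-x)))
    return np.allclose(softplus, neg_log_sigmoid)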
def calc_z_w_reg(z_w):
reg = tf.reduce_mean(z_w * z_w)
return reg
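# --- Note (added for exposition; not part of the original file) ---
# calc_z_w_reg above is a plain mean-square (L2) penalty on the mapped
# latents, reg = mean(z_w ** 2); G_logistic_ns_regW below adds
# regW_lambda * reg to the generator loss to keep w-space magnitudes small.
# NumPy restatement of the same quantity:
def _z_w_reg_np(z_w):
    return float(np.mean(np.square(z_w)))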
def G_logistic_ns_regW(G, D, opt, training_set, minibatch_size, DM=None, latent_type='uniform', regW_lambda=1):
_ = opt
# latents = tf.random_normal([minibatch_size] + G.input_shapes[0][1:])
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size, G.input_shapes[0][1]], minval=-2, maxval=2)
elif latent_type == 'normal':
latents = tf.random.normal([minibatch_size, G.input_shapes[0][1]])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size, G.input_shapes[0][1]])
else:
raise ValueError('Latent type not supported: ' + latent_type)
labels = training_set.get_random_labels_tf(minibatch_size)
fake_images_out, _, z_w = get_return_v(G.get_output_for(latents, labels, is_training=True, return_atts=False), 3)
fake_scores_out = get_return_v(D.get_output_for(fake_images_out, labels, is_training=True), 1)
loss_z_w = calc_z_w_reg(z_w)
loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
loss += regW_lambda * loss_z_w
return loss, None
def calc_vc_loss(C_delta_latents, regress_out, D_global_size, C_global_size, D_lambda, C_lambda, delta_type):
assert regress_out.shape.as_list()[1] == (D_global_size + C_global_size)
# Continuous latents loss
if delta_type == 'onedim':
prob_C = tf.nn.softmax(regress_out[:, D_global_size:], axis=1)
I_loss_C = C_delta_latents * tf.log(prob_C + 1e-12)
I_loss_C = C_lambda * I_loss_C
I_loss_C = tf.reduce_sum(I_loss_C, axis=1)
I_loss = - I_loss_C
# Continuous latents loss
# I_loss_C = tf.nn.softmax_cross_entropy_with_logits_v2(C_delta_latents,
# regress_out, axis=1, name='delta_regress_loss')
# I_loss = C_lambda * I_loss_C
    elif delta_type == 'fulldim':
        I_loss_C = tf.reduce_sum((tf.nn.sigmoid(regress_out[:, D_global_size:]) - C_delta_latents) ** 2, axis=1)
        I_loss = C_lambda * I_loss_C
    else:
        # Guard against a NameError on I_loss, mirroring the latent_type
        # handling used elsewhere in this file.
        raise ValueError('Delta type not supported: ' + delta_type)
    return I_loss
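# How calc_vc_loss is wired (a sketch; the shape names are assumptions drawn
# from the callers below, not definitions from this file):
#   regress_out     : [batch, D_global_size + C_global_size] regressor logits
#   C_delta_latents : one-hot [batch, C] under 'onedim', uniform [batch, C]
#                     under 'fulldim'
# 'onedim' treats the regressor as a classifier over which continuous
# dimension was perturbed (cross-entropy against the one-hot target), while
# 'fulldim' pushes sigmoid(regress_out) toward the sampled perturbation (MSE).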
def G_logistic_ns_vc2(G, D, I, opt, training_set, minibatch_size, DM, I_info=None, latent_type='uniform',
D_global_size=0, D_lambda=0, C_lambda=1, epsilon=0.4,
random_eps=False, delta_type='onedim', own_I=False):
_ = opt
discrete_latents = None
C_global_size = G.input_shapes[0][1]-D_global_size
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
discrete_latents_2 = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents_2 = tf.one_hot(discrete_latents_2, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
# Sample delta latents
if delta_type == 'onedim':
C_delta_latents = tf.random.uniform([minibatch_size], minval=0, maxval=C_global_size, dtype=tf.int32)
C_delta_latents = tf.cast(tf.one_hot(C_delta_latents, C_global_size), latents.dtype)
elif delta_type == 'fulldim':
C_delta_latents = tf.random.uniform([minibatch_size, C_global_size], minval=0, maxval=1.0, dtype=latents.dtype)
if delta_type == 'onedim':
if not random_eps:
delta_target = C_delta_latents * epsilon
else:
epsilon = epsilon * tf.random.normal([minibatch_size, 1], mean=0.0, stddev=2.0)
delta_target = C_delta_latents * epsilon
else:
delta_target = (C_delta_latents - 0.5) * epsilon
delta_latents = delta_target + latents
if D_global_size > 0:
latents = tf.concat([discrete_latents, latents], axis=1)
delta_latents = tf.concat([tf.zeros([minibatch_size, D_global_size]), delta_latents], axis=1)
# labels = training_set.get_random_labels_tf(minibatch_size)
# if own_I:
# fake1_out, atts = G.get_output_for(latents, labels, is_training=True, return_atts=True)
# else:
# fake1_out = G.get_output_for(latents, labels, is_training=True, return_atts=False)
# fake2_out = G.get_output_for(delta_latents, labels, is_training=True, return_atts=False)
labels = training_set.get_random_labels_tf(2*minibatch_size)
latents_all = tf.concat([latents, delta_latents], axis=0)
if own_I:
fake_all_out, atts_all = G.get_output_for(latents_all, labels, is_training=True, return_atts=True)
fake1_out, fake2_out = tf.split(fake_all_out, 2, axis=0)
atts = atts_all[:minibatch_size]
else:
fake_all_out = G.get_output_for(latents_all, labels, is_training=True)
fake1_out, fake2_out = tf.split(fake_all_out, 2, axis=0)
if I_info is not None:
fake_scores_out, hidden = D.get_output_for(fake1_out, labels, is_training=True)
else:
fake_scores_out = D.get_output_for(fake1_out, labels, is_training=True)
G_loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
if own_I:
regress_out = I.get_output_for(fake1_out, fake2_out, atts, is_training=True)
# regress_out = regress_out[:, ::-1]
else:
regress_out = I.get_output_for(fake1_out, fake2_out, is_training=True)
I_loss = calc_vc_loss(C_delta_latents, regress_out, D_global_size, C_global_size, D_lambda, C_lambda, delta_type)
# I_loss = calc_vc_loss(delta_target, regress_out, D_global_size, C_global_size, D_lambda, C_lambda)
I_loss = autosummary('Loss/I_loss', I_loss)
G_loss += I_loss
return G_loss, None
def calc_vc_byvae_loss(latents, delta_latents, reg1_out, reg2_out, C_delta_latents,
D_global_size, C_global_size, D_lambda, C_lambda, delta_type):
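    """
    VAE-style regression loss: the regressor predicts the input latents of
    each image in the (original, perturbed) pair. For dimensions that were
    not perturbed (var_mask False) both predictions are replaced by their
    pair average, so only the varied dimension has to be matched per-image.
    """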
reg12_avg = 0.5 * (reg1_out + reg2_out)
var_mask = C_delta_latents > 0
reg1_out_hat = tf.where(var_mask, reg1_out, reg12_avg)
reg2_out_hat = tf.where(var_mask, reg2_out, reg12_avg)
I_loss1 = tf.reduce_sum(tf.math.squared_difference(latents, reg1_out_hat), axis=1)
I_loss2 = tf.reduce_sum(tf.math.squared_difference(delta_latents, reg2_out_hat), axis=1)
I_loss = 0.5 * (I_loss1 + I_loss2)
I_loss = autosummary('Loss/I_loss', I_loss)
I_loss *= C_lambda
return I_loss
def G_logistic_byvae_ns_vc2(G, D, I, opt, training_set, minibatch_size, DM=None, I_info=None, latent_type='uniform',
D_global_size=0, D_lambda=0, C_lambda=1, epsilon=0.4,
random_eps=False, delta_type='onedim', own_I=False,
use_cascade=False, cascade_dim=None):
_ = opt
discrete_latents = None
C_global_size = G.input_shapes[0][1]-D_global_size
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
discrete_latents_2 = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents_2 = tf.one_hot(discrete_latents_2, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
# Sample delta latents
if delta_type == 'onedim':
if use_cascade:
C_delta_latents = tf.cast(tf.one_hot(cascade_dim, C_global_size), latents.dtype)
C_delta_latents = tf.tile(C_delta_latents[tf.newaxis, :], [minibatch_size, 1])
print('after onehot, C_delta_latents.shape:', C_delta_latents.get_shape().as_list())
else:
C_delta_latents = tf.random.uniform([minibatch_size], minval=0, maxval=C_global_size, dtype=tf.int32)
C_delta_latents = tf.cast(tf.one_hot(C_delta_latents, C_global_size), latents.dtype)
elif delta_type == 'fulldim':
C_delta_latents = tf.random.uniform([minibatch_size, C_global_size], minval=0, maxval=1.0, dtype=latents.dtype)
if delta_type == 'onedim':
if not random_eps:
delta_target = C_delta_latents * epsilon
else:
epsilon = epsilon * tf.random.normal([minibatch_size, 1], mean=0.0, stddev=2.0)
delta_target = C_delta_latents * epsilon
else:
delta_target = (C_delta_latents - 0.5) * epsilon
delta_latents = delta_target + latents
if D_global_size > 0:
latents = tf.concat([discrete_latents, latents], axis=1)
delta_latents = tf.concat([tf.zeros([minibatch_size, D_global_size]), delta_latents], axis=1)
# labels = training_set.get_random_labels_tf(minibatch_size)
# if own_I:
# fake1_out, atts = G.get_output_for(latents, labels, is_training=True, return_atts=True)
# else:
# fake1_out = G.get_output_for(latents, labels, is_training=True, return_atts=False)
# fake2_out = G.get_output_for(delta_latents, labels, is_training=True, return_atts=False)
labels = training_set.get_random_labels_tf(2*minibatch_size)
latents_all = tf.concat([latents, delta_latents], axis=0)
if own_I:
fake_all_out, atts_all = G.get_output_for(latents_all, labels, is_training=True, return_atts=True)
atts = atts_all[:minibatch_size]
else:
fake_all_out = G.get_output_for(latents_all, labels, is_training=True, return_atts=False)
fake1_out, fake2_out = tf.split(fake_all_out, 2, axis=0)
if I_info is not None:
fake_scores_out, hidden = D.get_output_for(fake1_out, labels, is_training=True)
else:
fake_scores_out = D.get_output_for(fake1_out, labels, is_training=True)
G_loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
if own_I:
regress_out = I.get_output_for(fake_all_out, atts_all, is_training=True)
# regress_out = regress_out[:, ::-1]
else:
regress_out = I.get_output_for(fake_all_out, is_training=True)
reg1_out, reg2_out = tf.split(regress_out, 2, axis=0)
I_loss = calc_vc_byvae_loss(latents, delta_latents, reg1_out, reg2_out, C_delta_latents,
D_global_size, C_global_size, D_lambda, C_lambda, delta_type)
# I_loss = calc_vc_loss(delta_target, regress_out, D_global_size, C_global_size, D_lambda, C_lambda)
I_loss = autosummary('Loss/I_loss', I_loss)
G_loss += I_loss
return G_loss, None
def calc_regress_loss(clatents, pred_outs, D_global_size, C_global_size, D_lambda, C_lambda,
minibatch_size, norm_ord=2, n_dim_strict=0, loose_rate=0.2):
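    """
    Latent-regression loss ||w * (pred - latent)||_norm_ord. With
    n_dim_strict == 1, one randomly chosen continuous dimension keeps full
    weight while every other dimension is down-weighted by loose_rate;
    otherwise all dimensions are weighted equally.
    """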
assert pred_outs.shape.as_list()[1] == (D_global_size + C_global_size)
# Continuous latents loss
# G2_loss_C = tf.reduce_sum((pred_outs[:] - clatents) ** 2, axis=1)
# Only n_dim_strict == full or 1 are supported now.
if n_dim_strict == 1:
# print('using n_dim_strict==1')
dropped_dim = tf.random.uniform([minibatch_size], minval=0, maxval=C_global_size, dtype=tf.int32)
dropped_dim = tf.cast(tf.one_hot(dropped_dim, C_global_size), pred_outs.dtype)
# pred_outs = pred_outs * (1 - dropped_dim)
# clatents = clatents * (1 - clatents)
else:
dropped_dim = tf.ones([minibatch_size, C_global_size], dtype=pred_outs.dtype)
# G2_loss_C = tf.norm(pred_outs - clatents, ord=norm_ord, axis=1)
G2_loss_C = tf.norm(dropped_dim * (pred_outs - clatents) + loose_rate * (1 - dropped_dim) * (pred_outs - clatents),
ord=norm_ord, axis=1)
G2_loss = C_lambda * G2_loss_C
return G2_loss
def calc_regress_grow_loss(clatents, pred_outs, D_global_size, C_global_size, D_lambda, C_lambda, opt_reset_ls):
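    """
    Growing variant of the regression loss: opt_reset_ls is read (reversed)
    as per-group global-step thresholds, one threshold per contiguous group
    of C_global_size // len(opt_reset_ls) latent dimensions; a group's
    squared error only contributes once the global step passes its threshold.
    """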
assert pred_outs.shape.as_list()[1] == (D_global_size + C_global_size)
print('opt_reset_ls:', opt_reset_ls)
opt_reset_tf = tf.constant(opt_reset_ls[::-1], dtype=tf.float32)
opt_reset_tf_mask = tf.reshape(opt_reset_tf, [1, len(opt_reset_ls), 1])
opt_reset_tf_mask = tf.tile(opt_reset_tf_mask, [1, 1, C_global_size // len(opt_reset_ls)])
opt_reset_tf_mask = tf.reshape(opt_reset_tf_mask, [1, C_global_size])
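    # NOTE: the tile/reshape above assumes C_global_size is divisible by
    # len(opt_reset_ls); each threshold covers one contiguous group of dims.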
g_step = tf.train.get_global_step()
opt_reset_tf_mask = opt_reset_tf_mask <= tf.cast(g_step, tf.float32)
opt_reset_tf_mask = tf.cast(opt_reset_tf_mask, dtype=clatents.dtype)
# Continuous latents loss
    # Gate each dimension's squared error by the growth-schedule mask; the
    # earlier "* 0" (which zeroed the loss entirely and left the mask unused)
    # appeared to be leftover debugging, so the masked form is restored.
    squared = ((pred_outs - clatents) ** 2) * opt_reset_tf_mask
G2_loss_C = tf.reduce_sum(squared, axis=1)
G2_loss = C_lambda * G2_loss_C
return G2_loss
def calc_outlier_loss(outlier, pred_outs, D_global_size, C_global_size, D_lambda, C_lambda):
assert pred_outs.shape.as_list()[1] == (D_global_size + C_global_size)
# Continuous latents loss
G2_loss_C = tf.nn.softmax_cross_entropy_with_logits_v2(outlier, pred_outs, axis=1, name='outlier_loss')
G2_loss = C_lambda * G2_loss_C
return G2_loss
def calc_regress_and_att_loss(clatents, pred_outs, atts, gen_atts, D_global_size, C_global_size,
D_lambda, C_lambda, att_lambda):
assert pred_outs.shape.as_list()[1] == (D_global_size + C_global_size)
# Continuous latents loss
G2_loss_C_pred = tf.reduce_sum((pred_outs - clatents) ** 2, axis=1)
G2_loss_pred = C_lambda * G2_loss_C_pred
G2_loss_pred = autosummary('Loss/G2_loss_pred', G2_loss_pred)
# Continuous gen_atts loss
G2_loss_C_atts = tf.reduce_mean((gen_atts - atts) ** 2, axis=[2,3,4])
G2_loss_C_atts = tf.reduce_sum(G2_loss_C_atts, axis=1)
G2_loss_atts = att_lambda * G2_loss_C_atts
G2_loss_atts = autosummary('Loss/G2_loss_atts', G2_loss_atts)
G2_loss = G2_loss_pred + G2_loss_atts
return G2_loss
def G_logistic_ns_vc2_info_gan(G, D, opt, training_set, minibatch_size, DM=None, I_info=None,
latent_type='uniform', D_global_size=0, D_lambda=0,
C_lambda=1, epsilon=0.4, random_eps=False, delta_type='onedim',
own_I=False, is_G2_loss=False, outlier_detector=False,
gen_atts_in_D=False, att_lambda=0):
_ = opt
discrete_latents = None
C_global_size = G.input_shapes[0][1]-D_global_size
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
if latent_type == 'uniform':
clatents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
clatents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
clatents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
print('Outlier_detector=', outlier_detector)
if is_G2_loss:
if outlier_detector:
outlier = tf.random.uniform([minibatch_size], minval=0, maxval=C_global_size, dtype=tf.int32)
outlier = tf.cast(tf.one_hot(outlier, C_global_size), clatents.dtype)
outlier_clatents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
            # value/3 + 2*sign(value) pushes outlier samples away from the origin;
            # tf.math.sign avoids the NaN that value/|value| yields at exactly 0.
            outlier_clatents = outlier_clatents / 3. + 2 * tf.math.sign(outlier_clatents)
outlier = outlier > 0
clatents = tf.where(outlier, outlier_clatents, clatents)
if D_global_size > 0:
latents = tf.concat([discrete_latents, clatents], axis=1)
else:
latents = clatents
    # Only one batch of latents is fed to G below, so sample matching labels
    # (the 2*minibatch_size here looked copied from the paired-latent losses).
    labels = training_set.get_random_labels_tf(minibatch_size)
fake_out, atts = G.get_output_for(latents, labels, is_training=True, return_atts=True)
if is_G2_loss:
if gen_atts_in_D:
fake_scores_out, pred_outs, gen_atts = D.get_output_for(fake_out, labels, atts, is_training=True,
gen_atts_in_D=True)
pred_outs = pred_outs[:, ::-1]
gen_atts = gen_atts[:, ::-1]
else:
fake_scores_out, pred_outs = D.get_output_for(fake_out, labels, atts, is_training=True)
pred_outs = pred_outs[:, ::-1]
else:
fake_scores_out = D.get_output_for(fake_out, labels, atts, is_training=True, return_preds=False)
if is_G2_loss:
if not outlier_detector:
if gen_atts_in_D:
G2_loss = calc_regress_and_att_loss(clatents, pred_outs, atts, gen_atts,
D_global_size, C_global_size,
D_lambda, C_lambda, att_lambda)
else:
G2_loss = calc_regress_loss(clatents, pred_outs, D_global_size, C_global_size, D_lambda, C_lambda, minibatch_size)
else:
G2_loss = calc_outlier_loss(outlier, pred_outs, D_global_size, C_global_size, D_lambda, C_lambda)
G2_loss = autosummary('Loss/G2_loss', G2_loss)
return G2_loss, None
else:
G_loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
return G_loss, None
def G_logistic_ns_vc2_info_gan2(G, D, I, opt, training_set, minibatch_size, DM=None,
latent_type='uniform', D_global_size=0, D_lambda=0,
C_lambda=1, norm_ord=2, n_dim_strict=0, loose_rate=0.2):
_ = opt
discrete_latents = None
C_global_size = G.input_shapes[0][1]-D_global_size
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
if latent_type == 'uniform':
clatents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
clatents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
clatents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
if D_global_size > 0:
latents = tf.concat([discrete_latents, clatents], axis=1)
else:
latents = clatents
labels = training_set.get_random_labels_tf(minibatch_size)
fake_out = G.get_output_for(latents, labels, is_training=True, return_atts=False)
fake_scores_out = D.get_output_for(fake_out, labels, is_training=True)
G_loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
regress_out = I.get_output_for(fake_out, is_training=True)
# I_loss = calc_regress_grow_loss(clatents, regress_out, D_global_size, C_global_size, D_lambda, C_lambda, opt_reset_ls)
I_loss = calc_regress_loss(clatents, regress_out, D_global_size, C_global_size, D_lambda, C_lambda,
minibatch_size, norm_ord=norm_ord, n_dim_strict=n_dim_strict, loose_rate=loose_rate)
I_loss = autosummary('Loss/I_loss', I_loss)
G_loss += I_loss
return G_loss, None
def D_logistic_r1_vc2(G, D, opt, training_set, minibatch_size, reals, labels, gamma=10.0, latent_type='uniform', D_global_size=0):
_ = opt, training_set
discrete_latents = None
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
        latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
if D_global_size > 0:
latents = tf.concat([discrete_latents, latents], axis=1)
fake_images_out = get_return_v(G.get_output_for(latents, labels, is_training=True, return_atts=False), 1)
real_scores_out = get_return_v(D.get_output_for(reals, labels, is_training=True), 1)
fake_scores_out = get_return_v(D.get_output_for(fake_images_out, labels, is_training=True), 1)
real_scores_out = autosummary('Loss/scores/real', real_scores_out)
fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)
loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))
loss += tf.nn.softplus(-real_scores_out) # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type
with tf.name_scope('GradientPenalty'):
real_grads = tf.gradients(tf.reduce_sum(real_scores_out), [reals])[0]
gradient_penalty = tf.reduce_sum(tf.square(real_grads), axis=[1,2,3])
gradient_penalty = autosummary('Loss/gradient_penalty', gradient_penalty)
reg = gradient_penalty * (gamma * 0.5)
return loss, reg
def D_logistic_r1_vc2_info_gan(G, D, opt, training_set, minibatch_size, reals, labels, gamma=10.0, latent_type='uniform', D_global_size=0):
_ = opt, training_set
discrete_latents = None
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
        latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
if D_global_size > 0:
latents = tf.concat([discrete_latents, latents], axis=1)
fake_images_out, atts = G.get_output_for(latents, labels, is_training=True)
real_scores_out = D.get_output_for(reals, labels, atts, is_training=True, return_preds=False)
fake_scores_out = D.get_output_for(fake_images_out, labels, atts, is_training=True, return_preds=False)
real_scores_out = autosummary('Loss/scores/real', real_scores_out)
fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)
loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))
loss += tf.nn.softplus(-real_scores_out) # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type
with tf.name_scope('GradientPenalty'):
real_grads = tf.gradients(tf.reduce_sum(real_scores_out), [reals])[0]
gradient_penalty = tf.reduce_sum(tf.square(real_grads), axis=[1,2,3])
gradient_penalty = autosummary('Loss/gradient_penalty', gradient_penalty)
reg = gradient_penalty * (gamma * 0.5)
return loss, reg
def D_logistic_r1_vc2_info_gan2(G, D, opt, training_set, minibatch_size, reals, labels, gamma=10.0, latent_type='uniform', D_global_size=0):
_ = opt, training_set
discrete_latents = None
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
        latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
if D_global_size > 0:
latents = tf.concat([discrete_latents, latents], axis=1)
fake_images_out = G.get_output_for(latents, labels, is_training=True, return_atts=False)
real_scores_out = D.get_output_for(reals, labels, is_training=True)
fake_scores_out = D.get_output_for(fake_images_out, labels, is_training=True)
real_scores_out = autosummary('Loss/scores/real', real_scores_out)
fake_scores_out = autosummary('Loss/scores/fake', fake_scores_out)
loss = tf.nn.softplus(fake_scores_out) # -log(1-sigmoid(fake_scores_out))
loss += tf.nn.softplus(-real_scores_out) # -log(sigmoid(real_scores_out)) # pylint: disable=invalid-unary-operand-type
with tf.name_scope('GradientPenalty'):
real_grads = tf.gradients(tf.reduce_sum(real_scores_out), [reals])[0]
gradient_penalty = tf.reduce_sum(tf.square(real_grads), axis=[1,2,3])
gradient_penalty = autosummary('Loss/gradient_penalty', gradient_penalty)
reg = gradient_penalty * (gamma * 0.5)
return loss, reg
def G_logistic_ns_vc2_traversal_contrastive(G, D, DM, opt, training_set, minibatch_size, I_info=None, latent_type='uniform',
n_neg_samples=1, D_global_size=0, D_lambda=0, C_lambda=1, epsilon=0.4,
random_eps=False, delta_type='onedim', own_I=False, temperature=1.):
_ = opt
discrete_latents = None
C_global_size = G.input_shapes[0][1]-D_global_size
if D_global_size > 0:
discrete_latents = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents = tf.one_hot(discrete_latents, D_global_size)
discrete_latents_2 = tf.random.uniform([minibatch_size], minval=0, maxval=D_global_size, dtype=tf.int32)
discrete_latents_2 = tf.one_hot(discrete_latents_2, D_global_size)
if latent_type == 'uniform':
latents = tf.random.uniform([minibatch_size] + [G.input_shapes[0][1]-D_global_size], minval=-2, maxval=2)
elif latent_type == 'normal':
latents = tf.random.normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
elif latent_type == 'trunc_normal':
latents = tf.random.truncated_normal([minibatch_size] + [G.input_shapes[0][1]-D_global_size])
else:
raise ValueError('Latent type not supported: ' + latent_type)
# Sample delta latents
C_delta_latents = tf.random.uniform([minibatch_size], minval=0, maxval=C_global_size, dtype=tf.int32)
C_delta_latents = tf.cast(tf.one_hot(C_delta_latents, C_global_size), latents.dtype)
epsilon = epsilon * tf.random.normal([minibatch_size, 1], mean=0.0, stddev=2.0)
delta_target = C_delta_latents * epsilon
delta_latents = delta_target + latents
neg_latents_ls = []
for i in range(n_neg_samples):
delta_other_dir_free = tf.random.normal([minibatch_size, C_global_size])
delta_other_dir, _ = tf.linalg.normalize(delta_other_dir_free, axis=1)
delta_other_dir_target = delta_other_dir * epsilon
delta_other_dir_latents = delta_other_dir_target + latents
neg_latents_ls.append(delta_other_dir_latents)
neg_latents = tf.reshape(tf.concat(neg_latents_ls, axis=1), [-1, C_global_size])
labels = training_set.get_random_labels_tf(minibatch_size * (n_neg_samples + 2))
latents_all = tf.concat([latents, delta_latents, neg_latents], axis=0)
fake_all_out = G.get_output_for(latents_all, labels, is_training=True)
fake_scores_out = D.get_output_for(fake_all_out[:minibatch_size, ...], labels[:minibatch_size], is_training=True)
G_loss = tf.nn.softplus(-fake_scores_out) # -log(sigmoid(fake_scores_out))
fake_all_out = (fake_all_out + 1) * (255 / 2) # Set dynamic_range for VGG.
fake_ori = fake_all_out[:minibatch_size, ...]
fake_pos = fake_all_out[minibatch_size: 2*minibatch_size, ...]
fake_negs = fake_all_out[2*minibatch_size:, ...]
# print('fake_ori.shape:', fake_ori.get_shape().as_list())
# print('fake_pos.shape:', fake_pos.get_shape().as_list())
# print('fake_negs.shape:', fake_negs.get_shape().as_list())
scores_pos = DM.get_output_for(fake_ori, fake_pos)[:, tf.newaxis] # [b, 1]
# print('scores_pos.shape:', scores_pos.get_shape().as_list())
    # fake_negs is sample-major (all negatives of sample 0 first), so the
    # anchors must be repeated per sample; tf.tile(fake_ori, [n_neg_samples,
    # 1, 1, 1]) would misalign the (anchor, negative) pairs for n_neg > 1.
    fake_ori_rep = tf.tile(fake_ori[:, tf.newaxis], [1, n_neg_samples, 1, 1, 1])
    fake_ori_rep = tf.reshape(fake_ori_rep, tf.concat([[-1], tf.shape(fake_ori)[1:]], axis=0))
    scores_negs = DM.get_output_for(fake_ori_rep, fake_negs) # [b * n_negs]
# print('scores_negs.shape:', scores_negs.get_shape().as_list())
scores_negs = tf.reshape(scores_negs, [minibatch_size, n_neg_samples]) # [b, n_negs]
# print('after reshape scores_negs.shape:', scores_negs.get_shape().as_list())
scores_all = tf.concat([scores_pos, scores_negs], axis=1) # [b, n_negs + 1]
contrastive_loss = - tf.log(tf.exp((1. - scores_pos[:,0]) / temperature) / tf.reduce_sum(tf.exp((1. - scores_all) / temperature), axis=1))
contrastive_loss = autosummary('Loss/contrastive_loss', contrastive_loss)
G_loss += C_lambda * contrastive_loss
return G_loss, None
| 52.662752 | 142 | 0.693217 | 4,772 | 31,387 | 4.192372 | 0.054484 | 0.070979 | 0.054434 | 0.030391 | 0.854094 | 0.809657 | 0.782115 | 0.769519 | 0.76502 | 0.746176 | 0 | 0.019704 | 0.188295 | 31,387 | 595 | 143 | 52.751261 | 0.765553 | 0.105585 | 0 | 0.673961 | 0 | 0 | 0.037801 | 0.003787 | 0 | 0 | 0 | 0 | 0.010941 | 1 | 0.037199 | false | 0 | 0.010941 | 0 | 0.087527 | 0.006565 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2abcc052539411d1d082fffb7be4b204bf261d97 | 24 | py | Python | MainModule/AlphaModel/MachineLearningModel/HyperParameterOptimizer.py | sarang2dan/pytrader | 55930b4f3efb8c18c4fce0d3adacdc26a2abc7ab | [
"MIT"
] | null | null | null | MainModule/AlphaModel/MachineLearningModel/HyperParameterOptimizer.py | sarang2dan/pytrader | 55930b4f3efb8c18c4fce0d3adacdc26a2abc7ab | [
"MIT"
] | null | null | null | MainModule/AlphaModel/MachineLearningModel/HyperParameterOptimizer.py | sarang2dan/pytrader | 55930b4f3efb8c18c4fce0d3adacdc26a2abc7ab | [
"MIT"
] | null | null | null | # todo: delete this file | 24 | 24 | 0.75 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0.916667 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ade65a035dc3954a7d9b7d1c6638fccfdc4135b | 8,262 | py | Python | quantiphyse/test/slice_plane_test.py | physimals/quantiphyse | bd3be0098b9929b1987fe0f23e515fa70674b3d4 | [
"Apache-2.0"
] | 9 | 2021-02-01T06:44:31.000Z | 2022-01-17T15:46:40.000Z | quantiphyse/test/slice_plane_test.py | ibme-qubic/quantiphyse | 34f40424941414ce139c4612a903de3f24883576 | [
"Apache-2.0"
] | 34 | 2019-02-04T10:47:02.000Z | 2020-08-13T09:36:52.000Z | quantiphyse/test/slice_plane_test.py | physimals/quantiphyse | bd3be0098b9929b1987fe0f23e515fa70674b3d4 | [
"Apache-2.0"
] | 2 | 2021-02-21T01:46:04.000Z | 2021-11-15T10:55:26.000Z | """
Quantiphyse - Tests for OrthoSlice class
Copyright (c) 2013-2020 University of Oxford
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import unittest
import numpy as np
from quantiphyse.data import DataGrid, NumpyData, OrthoSlice
GRIDSIZE = 5
SLICEPOS = 2
XAXIS, YAXIS, ZAXIS = 0, 1, 2
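# XAXIS/YAXIS/ZAXIS index the three grid axes; SLICEPOS is the fixed
# coordinate of the slicing plane along the chosen axis.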
class OrthoSliceTest(unittest.TestCase):
def testOrthoY(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
self.assertEquals(tuple(plane.origin), (0, SLICEPOS, 0))
self.assertEquals(len(plane.basis), 2)
self.assertTrue((1, 0, 0) in plane.basis)
self.assertTrue((0, 0, 1) in plane.basis)
def testOrthoX(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, XAXIS, SLICEPOS)
self.assertEquals(tuple(plane.origin), (SLICEPOS, 0, 0))
self.assertEquals(len(plane.basis), 2)
self.assertTrue((0, 1, 0) in plane.basis)
self.assertTrue((0, 0, 1) in plane.basis)
def testOrthoZ(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, ZAXIS, SLICEPOS)
self.assertEquals(tuple(plane.origin), (0, 0, SLICEPOS))
self.assertEquals(len(plane.basis), 2)
self.assertTrue((0, 1, 0) in plane.basis)
self.assertTrue((1, 0, 0) in plane.basis)
def testGenericX(self):
trans = np.array([
[0.3, 0.2, 1.7, 0],
[0.1, 2.1, 0.11, 0],
[2.2, 0.7, 0.3, 0],
[0, 0, 0, 1]
])
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), trans)
origin = list(SLICEPOS * trans[:3,0])
plane = OrthoSlice(grid, XAXIS, SLICEPOS)
self.assertAlmostEquals(list(plane.origin), origin)
self.assertEquals(len(plane.basis), 2)
self.assertTrue(tuple(trans[:3, 2]) in plane.basis)
self.assertTrue(tuple(trans[:3, 1]) in plane.basis)
def testGenericY(self):
trans = np.array([
[0.3, 0.2, 1.7, 0],
[0.1, 2.1, 0.11, 0],
[2.2, 0.7, 0.3, 0],
[0, 0, 0, 1]
])
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), trans)
origin = list(SLICEPOS * trans[:3,1])
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
self.assertAlmostEquals(list(plane.origin), origin)
self.assertEquals(len(plane.basis), 2)
self.assertTrue(tuple(trans[:3, 0]) in plane.basis)
self.assertTrue(tuple(trans[:3, 2]) in plane.basis)
def testGenericZ(self):
trans = np.array([
[0.3, 0.2, 1.7, 0],
[0.1, 2.1, 0.11, 0],
[2.2, 0.7, 0.3, 0],
[0, 0, 0, 1]
])
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), trans)
origin = list(SLICEPOS * trans[:3,2])
plane = OrthoSlice(grid, ZAXIS, SLICEPOS)
self.assertAlmostEquals(list(plane.origin), origin)
self.assertEquals(len(plane.basis), 2)
self.assertTrue(tuple(trans[:3, 0]) in plane.basis)
self.assertTrue(tuple(trans[:3, 1]) in plane.basis)
"""
def testSliceIdenticalZ(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, ZAXIS, SLICEPOS)
data = np.random.rand(*grid.shape)
plane.slice_data(data, grid)
def testSliceIdenticalY(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
data = np.random.rand(*grid.shape)
plane.slice_data(data, grid)
"""
def testHighRes(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
data = np.random.rand(GRIDSIZE*2, GRIDSIZE*2, GRIDSIZE*2)
datagrid = DataGrid((GRIDSIZE*2, GRIDSIZE*2, GRIDSIZE*2), np.identity(4)/2)
qpd = NumpyData(data, name="test", grid=datagrid)
qpd.slice_data(plane)
def testOrtho(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
YD, XD, ZD = np.meshgrid(range(GRIDSIZE), range(GRIDSIZE), range(GRIDSIZE))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
xdata, _, _, _ = NumpyData(XD, name="test", grid=grid).slice_data(plane)
ydata, _, _, _ = NumpyData(YD, name="test", grid=grid).slice_data(plane)
zdata, _, _, _ = NumpyData(ZD, name="test", grid=grid).slice_data(plane)
self.assertTrue(np.all(ydata == SLICEPOS))
for x in range(GRIDSIZE):
self.assertTrue(np.all(xdata[x,:] == x))
self.assertTrue(np.all(zdata[:,x] == x))
def testOrthoSwapAxis(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
# Swap Y and Z axes
affine = np.array([
[1, 0, 0, 0],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 0, 1]
])
datagrid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), affine)
YD, XD, ZD = np.meshgrid(range(GRIDSIZE), range(GRIDSIZE), range(GRIDSIZE))
xdata, _, _, _ = NumpyData(XD, name="test", grid=datagrid).slice_data(plane)
ydata, _, _, _ = NumpyData(YD, name="test", grid=datagrid).slice_data(plane)
zdata, _, _, _ = NumpyData(ZD, name="test", grid=datagrid).slice_data(plane)
self.assertTrue(np.all(zdata == SLICEPOS))
for x in range(GRIDSIZE):
self.assertTrue(np.all(xdata[x,:] == x))
self.assertTrue(np.all(ydata[:,x] == x))
def testOrthoReversed(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
# Invert Z axis
affine = np.array([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, -1, GRIDSIZE-1],
[0, 0, 0, 1]
])
datagrid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), affine)
YD, XD, ZD = np.meshgrid(range(GRIDSIZE), range(GRIDSIZE), range(GRIDSIZE))
xdata, _, _, _ = NumpyData(XD, name="test", grid=datagrid).slice_data(plane)
ydata, _, _, _ = NumpyData(YD, name="test", grid=datagrid).slice_data(plane)
zdata, _, transv, offset = NumpyData(ZD, name="test", grid=datagrid).slice_data(plane)
# Reversal is reflected in the transformation
self.assertTrue(np.all(transv == [[1, 0], [0, -1]]))
self.assertTrue(np.all(ydata == SLICEPOS))
for x in range(GRIDSIZE):
self.assertTrue(np.all(xdata[x,:] == x))
self.assertTrue(np.all(zdata[:,x] == x))
def testOrthoOffset(self):
grid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), np.identity(4))
plane = OrthoSlice(grid, YAXIS, SLICEPOS)
# Offset X axis
affine = np.array([
[1, 0, 0, 2],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
])
datagrid = DataGrid((GRIDSIZE, GRIDSIZE, GRIDSIZE), affine)
YD, XD, ZD = np.meshgrid(range(GRIDSIZE), range(GRIDSIZE), range(GRIDSIZE))
xdata, _, _, _ = NumpyData(XD, name="test", grid=datagrid).slice_data(plane)
ydata, _, _, _ = NumpyData(YD, name="test", grid=datagrid).slice_data(plane)
zdata, _, transv, offset = NumpyData(ZD, name="test", grid=datagrid).slice_data(plane)
self.assertTrue(np.all(ydata == SLICEPOS))
for x in range(GRIDSIZE):
self.assertTrue(np.all(xdata[x,:] == x))
self.assertTrue(np.all(zdata[:,x] == x))
if __name__ == '__main__':
unittest.main()
| 38.971698 | 94 | 0.592229 | 1,050 | 8,262 | 4.607619 | 0.140952 | 0.017776 | 0.013022 | 0.105829 | 0.780074 | 0.778007 | 0.739355 | 0.714345 | 0.704423 | 0.674659 | 0 | 0.03481 | 0.25938 | 8,262 | 211 | 95 | 39.156398 | 0.755842 | 0.084967 | 0 | 0.732877 | 0 | 0 | 0.008495 | 0 | 0 | 0 | 0 | 0 | 0.253425 | 1 | 0.075342 | false | 0 | 0.020548 | 0 | 0.10274 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6303229429a9331b3524ad95522d176b87a41486 | 189 | py | Python | src/app/main/admin.py | OlegBugaichuk/site_template | 7094ef7cfa0c487ac124f94642e5449dfa6d0dbb | [
"MIT"
] | null | null | null | src/app/main/admin.py | OlegBugaichuk/site_template | 7094ef7cfa0c487ac124f94642e5449dfa6d0dbb | [
"MIT"
] | null | null | null | src/app/main/admin.py | OlegBugaichuk/site_template | 7094ef7cfa0c487ac124f94642e5449dfa6d0dbb | [
"MIT"
] | null | null | null | from django.contrib import admin
from solo.admin import SingletonModelAdmin
from .models import MainPageDB
@admin.register(MainPageDB)
class MainPageAdmin(SingletonModelAdmin):
pass
| 18.9 | 42 | 0.825397 | 21 | 189 | 7.428571 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121693 | 189 | 9 | 43 | 21 | 0.939759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.5 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
630ba25f20e366903e249dce970ded14e1bb22f2 | 99 | py | Python | functions/modules/making_pizzas.py | nv-krishna/python-crash-course | d481faeb2196712cd52ca1d34dc1fe967d13712f | [
"Apache-2.0"
] | 2 | 2020-11-02T05:52:33.000Z | 2021-06-09T01:28:22.000Z | functions/modules/making_pizzas.py | nv-krishna/python-crash-course | d481faeb2196712cd52ca1d34dc1fe967d13712f | [
"Apache-2.0"
] | null | null | null | functions/modules/making_pizzas.py | nv-krishna/python-crash-course | d481faeb2196712cd52ca1d34dc1fe967d13712f | [
"Apache-2.0"
] | 2 | 2021-04-08T05:26:04.000Z | 2021-06-09T01:28:23.000Z | import pizza
pizza.make_pizza(12, "cheese")
pizza.make_pizza(16,"pepperoni","sausages","tomatoes") | 24.75 | 54 | 0.767677 | 14 | 99 | 5.285714 | 0.642857 | 0.243243 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0.050505 | 99 | 4 | 54 | 24.75 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0.31 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2d497fd37e44212fbb241a980ae462c95f99e02f | 245 | py | Python | octopod/__init__.py | sreeshnair/octopod | c4d26c19735dff7c386338324a7ba1fd56ffbdab | [
"BSD-3-Clause"
] | 27 | 2020-04-13T20:07:31.000Z | 2020-06-11T09:08:32.000Z | octopod/__init__.py | sreeshnair/octopod | c4d26c19735dff7c386338324a7ba1fd56ffbdab | [
"BSD-3-Clause"
] | 24 | 2020-07-09T15:43:10.000Z | 2022-03-08T18:24:25.000Z | octopod/__init__.py | sreeshnair/octopod | c4d26c19735dff7c386338324a7ba1fd56ffbdab | [
"BSD-3-Clause"
] | 9 | 2020-11-02T16:33:12.000Z | 2022-03-05T00:21:40.000Z | from ._version import __version__
from octopod.dataloader import MultiDatasetLoader
from octopod.ensemble import *
from octopod.learner import MultiTaskLearner, MultiInputMultiTaskLearner
from octopod.text import *
from octopod.vision import *
| 30.625 | 72 | 0.853061 | 27 | 245 | 7.555556 | 0.444444 | 0.269608 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106122 | 245 | 7 | 73 | 35 | 0.931507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2d8f40ed0bd683a1b39ebd9efb392ea9b9c5232e | 166 | py | Python | pyseeta/__init__.py | Kite0011/pyseeta | 078a5c2457dba9f7bd67201e224403be489ccf76 | [
"MIT"
] | 98 | 2017-04-15T18:34:53.000Z | 2020-12-07T09:16:25.000Z | pyseeta/__init__.py | Kite0011/pyseeta | 078a5c2457dba9f7bd67201e224403be489ccf76 | [
"MIT"
] | 15 | 2017-03-28T05:04:26.000Z | 2021-09-28T11:20:32.000Z | pyseeta/__init__.py | Kite0011/pyseeta | 078a5c2457dba9f7bd67201e224403be489ccf76 | [
"MIT"
] | 26 | 2017-04-25T06:06:26.000Z | 2021-03-06T15:35:31.000Z |
__all__ = ['aligner', 'identifier', 'detector']
from pyseeta.detector import Detector
from pyseeta.aligner import Aligner
from pyseeta.identifier import Identifier
| 23.714286 | 47 | 0.801205 | 19 | 166 | 6.789474 | 0.368421 | 0.255814 | 0.294574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114458 | 166 | 6 | 48 | 27.666667 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2dfdb75d9bd897036524a29f6c86fac1865fa215 | 365 | py | Python | Darlington/phase1/python Basic 2/day 25 solution/qtn7.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase1/python Basic 2/day 25 solution/qtn7.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase1/python Basic 2/day 25 solution/qtn7.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #program to check whether a given employee code is exactly 8 digits or 12 digits.
def is_valid_emp_code(emp_code):
return len(emp_code) in [8, 12] and emp_code.isdigit()
print(is_valid_emp_code('12345678'))
print(is_valid_emp_code('1234567j'))
print(is_valid_emp_code('12345678j'))
print(is_valid_emp_code('123456789123'))
print(is_valid_emp_code('123456abcdef')) | 45.625 | 81 | 0.8 | 63 | 365 | 4.301587 | 0.444444 | 0.232472 | 0.221402 | 0.309963 | 0.350554 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140299 | 0.082192 | 365 | 8 | 82 | 45.625 | 0.668657 | 0.219178 | 0 | 0 | 0 | 0 | 0.17193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0.142857 | 0.285714 | 0.714286 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
931911b827c0954766ec8f6ecfcfa9d3e10c008c | 83 | py | Python | mmorpg/old/Model/Direction/ZDirection/zdirection.py | InnovAnon-Inc/MAiZE | 6b7b266d85f8932557013e3c32bcc728c53f616f | [
"Unlicense"
] | null | null | null | mmorpg/old/Model/Direction/ZDirection/zdirection.py | InnovAnon-Inc/MAiZE | 6b7b266d85f8932557013e3c32bcc728c53f616f | [
"Unlicense"
] | null | null | null | mmorpg/old/Model/Direction/ZDirection/zdirection.py | InnovAnon-Inc/MAiZE | 6b7b266d85f8932557013e3c32bcc728c53f616f | [
"Unlicense"
] | null | null | null | from Model.Direction.direction import Direction
class ZDirection (Direction): pass | 27.666667 | 47 | 0.843373 | 10 | 83 | 7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 83 | 3 | 48 | 27.666667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
931c7a08fad91c9d9674661c8abb6799ba80bb2f | 41 | py | Python | util/__init__.py | growlxy/CSAnalysis | 54de4abfadb78b2a4e78c88bcc7dd3354370dbea | [
"Apache-2.0"
] | null | null | null | util/__init__.py | growlxy/CSAnalysis | 54de4abfadb78b2a4e78c88bcc7dd3354370dbea | [
"Apache-2.0"
] | null | null | null | util/__init__.py | growlxy/CSAnalysis | 54de4abfadb78b2a4e78c88bcc7dd3354370dbea | [
"Apache-2.0"
] | null | null | null | from .csv_name_getter import get_csv_name | 41 | 41 | 0.902439 | 8 | 41 | 4.125 | 0.75 | 0.424242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
932b7d34be33dc780eb39c1c57f6697697aa6e23 | 455 | py | Python | python/testData/refactoring/inlinelocal/operatorPrecedence/matrixMultiplication.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/refactoring/inlinelocal/operatorPrecedence/matrixMultiplication.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/inlinelocal/operatorPrecedence/matrixMultiplication.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | (y @ z)[::-5]
(y @ z)[5]
(y @ z)(5)
(y @ z).foo
-(y @ z)
+(y @ z)
~(y @ z)
5 ** (y @ z)
(y @ z) ** 5
5 * y @ z
y @ z * 5
5 / (y @ z)
y @ z / 5
5 // (y @ z)
y @ z // 5
5 + y @ z
y @ z + 5
y @ z - 5
5 - y @ z
5 >> y @ z
y @ z << 5
5 & y @ z
y @ z & 5
5 ^ y @ z
y @ z ^ 5
5 | y @ z
y @ z | 5
() in y @ z
y @ z in ()
5 is y @ z
y @ z is 5
5 < y @ z
y @ z < 5
not y @ z
5 and y @ z
y @ z and 5
5 or y @ z
y @ z or 5
y @ z if y @ z else y @ z
| 7.844828 | 25 | 0.314286 | 125 | 455 | 1.144 | 0.088 | 0.573427 | 0.356643 | 0.447552 | 0.573427 | 0.573427 | 0.559441 | 0.482517 | 0.405594 | 0.405594 | 0 | 0.123506 | 0.448352 | 455 | 57 | 26 | 7.982456 | 0.446215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9346c28beb385b41e2333d28c935f2a53c278391 | 9,146 | py | Python | emailapp/sql_helpers/ready_to_send_email_helper.py | manisharmagarg/Email_Management | 4241d3e0970558ea8a650b424a3cdb4b5a009149 | [
"Apache-2.0"
] | null | null | null | emailapp/sql_helpers/ready_to_send_email_helper.py | manisharmagarg/Email_Management | 4241d3e0970558ea8a650b424a3cdb4b5a009149 | [
"Apache-2.0"
] | null | null | null | emailapp/sql_helpers/ready_to_send_email_helper.py | manisharmagarg/Email_Management | 4241d3e0970558ea8a650b424a3cdb4b5a009149 | [
"Apache-2.0"
] | null | null | null | from .database import Database
class ReadyToSendEmailsHelper(Database):
def __init__(self, *args):
super(ReadyToSendEmailsHelper, self).__init__(*args)
def add_ready_to_emails(self, email_address, campaign_id, html_template, subject, status):
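        """Insert one rendered email row for a campaign (e.g. status='READY_TO_SEND')."""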
data = {"email_address": email_address, "campaign_id": campaign_id, "html_template": html_template,
"subject": subject, "status": status}
self.insert("ready_to_send_emails", data)
# query = "INSERT INTO ready_to_send_emails(email_address, campaign_id, html_template, subject, status) " \
# "VALUE (%s, %s, %s, %s, %s)"
#
# self.add(query, (email_address, campaign_id, html_template, subject, status))
def get_all_emails_to_queued(self, campaign_id):
        fields = ('id', 'email_address', 'campaign_id', 'template_html', 'subject',
'status', 'list_segment_id', 'created_on')
        # Restores the 'QUEUED' status and the campaign_id filter documented
        # by the legacy query kept in the comments below.
        where = ("(status=%s or status=%s) and campaign_id=%s",
                 ['READY_TO_SEND', 'QUEUED', campaign_id])
return self.getAll("ready_to_send_emails", fields, where)
# query = "Select * from ready_to_send_emails " \
# "where (status='READY_TO_SEND' or status='QUEUED') and campaign_id=%s;" % campaign_id
# return self.fetch_all(query)
def get_total_email_in_segment(self, list_id):
fields = ('id', 'email', 'list_id', 'user_id', 'created_on')
where = ('list_id=%s', [list_id])
return self.getAll("list_segments", fields, where)
# query = "Select * from list_segments where list_id=%s" % list_id
# return self.fetch_all(query)
def get_error_send_emails(self, campaign_id):
        fields = ('id', 'email_address', 'campaign_id', 'template_html', 'subject',
'status', 'list_segment_id', 'created_on')
where = ('campaign_id=%s and status=%s ', [campaign_id, 'SENT'])
return self.getAll("ready_to_send_emails", fields, where)
# query = "Select * from ready_to_send_emails " \
# "where status='SENT' and campaign_id=%s;" % campaign_id
# return self.fetch_all(query)
def get_error_emails(self, campaign_id):
        fields = ('id', 'email_address', 'campaign_id', 'template_html', 'subject',
'status', 'list_segment_id', 'created_on')
where = ('campaign_id=%s and status=%s ', [campaign_id, 'ERROR'])
return self.getAll("ready_to_send_emails", fields, where)
# query = "Select * from ready_to_send_emails " \
# "where status='ERROR' and campaign_id=%s;" % campaign_id
# return self.fetch_all(query)
def get_send_emails(self, campaign_id):
        fields = ('id', 'email_address', 'campaign_id', 'subject',
'status', 'list_segment_id', 'created_on')
where = ('campaign_id=%s and status=%s ', [campaign_id, 'SENT'])
return self.getAll("ready_to_send_emails", fields, where)
def get_ab_campaign_send_emails(self, campaign_id):
# query = "select ab_campaign_id, email_address from email_management_db.ready_to_send_emails " \
# "where ab_campaign_id = {} and status = %s".format(campaign_id, 'SENT')
# return self.query(query)
        fields = ('id', 'email_address', 'ab_campaign_id', 'template_html', 'subject',
'status', 'list_segment_id', 'created_on')
where = ('ab_campaign_id=%s and status=%s ', [campaign_id, 'SENT'])
return self.getAll("ready_to_send_emails", fields, where)
# query = "Select * from ready_to_send_emails " \
# "where status='SENT' and campaign_id=%s;" % campaign_id
# return self.fetch_all(query)
def get_unsubscribe_emails(self, campaign_id):
fields = ('id', 'email_address')
where = ('campaign_id=%s and status=%s ', [campaign_id, 'UNSUBSCRIBE'])
return self.getAll("ready_to_send_emails", fields, where)
def get_ab_unsubscribe_emails(self, campaign_id):
fields = ('id', 'email_address')
where = ('ab_campaign_id=%s and status=%s ', [campaign_id, 'UNSUBSCRIBE'])
return self.getAll("ready_to_send_emails", fields, where)
def get_sent_emails_by_date(self, date):
fields = ('id', 'status')
where = ("(status='SENT' or status = 'ERROR' or "
"status = 'READY_TO_SEND' or status = 'QUEUED' or "
"status = 'UNSUBSCRIBE') and DATE(created_on) = %s", [date])
return self.getAll('ready_to_send_emails', fields, where)
def get_sent_emails_by_date_campaign_id(self, date, id):
fields = ('id', 'status')
where = ("(status='SENT' or status = 'ERROR' or "
"status = 'READY_TO_SEND' or status = 'QUEUED' or "
"status = 'UNSUBSCRIBE') and DATE(created_on) = %s and campaign_id = %s", [date, id])
return self.getAll('ready_to_send_emails', fields, where)
def get_ab_sent_emails_by_date_campaign_id(self, date, id):
fields = ('id', 'status')
where = ("(status='SENT' or status = 'ERROR' or "
"status = 'READY_TO_SEND' or status = 'QUEUED' or "
"status = 'UNSUBSCRIBE') and DATE(created_on) = %s and ab_campaign_id = %s", [date, id])
return self.getAll('ready_to_send_emails', fields, where)
def get_sent_emails_by_id(self, id):
fields = ('id', 'status')
where = ("(status='SENT' or status = 'ERROR' or "
"status = 'READY_TO_SEND' or status = 'QUEUED' or "
"status = 'UNSUBSCRIBE') and campaign_id = %s", [id])
return self.getAll('ready_to_send_emails', fields, where)
def get_ab_sent_emails_by_id(self, id):
fields = ('id', 'status')
where = ("(status='SENT' or status = 'ERROR' or "
"status = 'READY_TO_SEND' or status = 'QUEUED' or "
"status = 'UNSUBSCRIBE') and ab_campaign_id = %s", [id])
return self.getAll('ready_to_send_emails', fields, where)
def get_all_emails_by_id(self, campaign_id=None):
fields = ('id', 'status', )
where = ("(status = 'READY_TO_SEND' or status = 'SENT' or status = 'ERROR' "
"or status = 'UNSUBSCRIBE') and campaign_id = %s", [campaign_id])
return self.getAll('ready_to_send_emails', fields=fields, where=where)
def check_status_by_ab_campaign_id(self, ab_campaign_id):
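        # Counts rows whose status contains 'UNSUBSCRIBE' for this A/B
        # campaign. NOTE: ab_campaign_id is interpolated straight into the
        # SQL string, so callers must pass a trusted value (ideally an int).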
query = "SELECT count(CASE WHEN status LIKE '%UNSUBSCRIBE%' THEN 1 END) AS unsubscribe " \
"FROM ready_to_send_emails where ab_campaign_id ={}".format(ab_campaign_id)
return self.query(query)
def update_status_to_pause(self, campaign_id, campaign_type, status):
data = {"status": status}
where = ("status = %s and campaign_type = %s "
"and campaign_id = %s", ['READY_TO_SEND', campaign_type, campaign_id])
self.update("ready_to_send_emails", data, where)
def update_ab_status_to_pause(self, campaign_id, campaign_type, status):
data = {"status": status}
where = ("status = %s and campaign_type = %s "
"and ab_campaign_id = %s", ['READY_TO_SEND', campaign_type, campaign_id])
self.update("ready_to_send_emails", data, where)
def update_status_to_resume(self, campaign_id, campaign_type, status):
data = {"status": status}
where = ("status = %s and campaign_type = %s "
"and campaign_id = %s", ['PAUSE', campaign_type, campaign_id])
self.update("ready_to_send_emails", data, where)
def update_ab_status_to_resume(self, campaign_id, campaign_type, status):
data = {"status": status}
where = ("status = %s and campaign_type = %s "
"and ab_campaign_id = %s", ['PAUSE', campaign_type, campaign_id])
self.update("ready_to_send_emails", data, where)
def get_ab_send_emails(self, campaign_id):
        fields = ('id', 'email_address', 'campaign_id', 'template_html', 'subject',
'status', 'list_segment_id', 'created_on')
where = ('campaign_id=%s and status=%s ', [campaign_id, 'SENT'])
return self.getAll("ready_to_send_emails", fields, where)
def check_ready_to_send_data_by_campaign_id(self, campaign_id):
fields = {'id', 'campaign_id', 'email_address'}
where = ("status = %s and campaign_id = %s", ['PAUSE', campaign_id])
get_data = self.getAll('ready_to_send_emails', fields=fields, where=where)
if get_data:
return True
return False
def check_ready_to_send_data_by_ab_campaign_id(self, campaign_id):
fields = {'id', 'campaign_id', 'email_address'}
where = ("status = %s and ab_campaign_id = %s", ['PAUSE', campaign_id])
get_data = self.getAll('ready_to_send_emails', fields=fields, where=where)
if get_data:
return True
return False
    def get_sent_email_by_date(self, date):
        # The date argument was previously ignored; filter on it, and on
        # status='SENT' as the method name implies.
        fields = ('email_address', 'subject')
        where = ("status = 'SENT' and DATE(created_on) = %s", [date])
        return self.getAll('ready_to_send_emails', fields, where)
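# A hedged usage sketch (the Database constructor argument and the email
# values below are illustrative assumptions, not taken from this module):
#
#   helper = ReadyToSendEmailsHelper(db_config)
#   helper.add_ready_to_emails('a@example.com', campaign_id=7,
#                              html_template='<p>Hi</p>', subject='Hello',
#                              status='READY_TO_SEND')
#   pending = helper.get_all_emails_to_queued(campaign_id=7)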
| 49.978142 | 115 | 0.624863 | 1,171 | 9,146 | 4.549957 | 0.063194 | 0.148273 | 0.084647 | 0.09253 | 0.864302 | 0.834459 | 0.820946 | 0.795608 | 0.736299 | 0.727102 | 0 | 0.000144 | 0.239558 | 9,146 | 182 | 116 | 50.252747 | 0.765924 | 0.121365 | 0 | 0.515873 | 0 | 0 | 0.353631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.198413 | false | 0 | 0.007937 | 0 | 0.380952 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
faae02d535ed69a49ed6af716d826a81a9a0403d | 12 | py | Python | VirClass/__init__.py | thecoparyew/Virus-classification-theano | 55c4a7b804fa65d14c2167a3bbbaa2cf1b4a3521 | [
"MIT"
] | null | null | null | VirClass/__init__.py | thecoparyew/Virus-classification-theano | 55c4a7b804fa65d14c2167a3bbbaa2cf1b4a3521 | [
"MIT"
] | 5 | 2016-12-08T17:51:59.000Z | 2017-02-23T11:18:32.000Z | VirClass/__init__.py | thecoparyew/Virus-classification-theano | 55c4a7b804fa65d14c2167a3bbbaa2cf1b4a3521 | [
"MIT"
] | null | null | null | """Todo."""
| 6 | 11 | 0.333333 | 1 | 12 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 12 | 1 | 12 | 12 | 0.363636 | 0.416667 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fab3454463b17a7434ca8409498c8bf4297a6474 | 164 | py | Python | python/645-Set-Mismatch.py | souradeepta/leetcode-practice | f20235c0e3846362a86443bc24339b337f43af04 | [
"MIT"
] | null | null | null | python/645-Set-Mismatch.py | souradeepta/leetcode-practice | f20235c0e3846362a86443bc24339b337f43af04 | [
"MIT"
] | null | null | null | python/645-Set-Mismatch.py | souradeepta/leetcode-practice | f20235c0e3846362a86443bc24339b337f43af04 | [
"MIT"
] | null | null | null | from typing import List

class Solution:
def findErrorNums(self, nums: List[int]) -> List[int]:
return [sum(nums) - sum(set(nums)), len(nums)*(len(nums)+1)//2 - sum(set(nums))]
| 41 | 88 | 0.609756 | 25 | 164 | 4 | 0.56 | 0.14 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014599 | 0.164634 | 164 | 3 | 89 | 54.666667 | 0.715328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4f00955bfc05595a0c16854ca27a7d918b2295ba | 21,391 | py | Python | pyramid_mongo_sessions/tests/test_factory.py | vkefallinos/pyramid_mongo_sessions | a2a5881dc43ddf2b062e6210978c5b605c0c2ff6 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | pyramid_mongo_sessions/tests/test_factory.py | vkefallinos/pyramid_mongo_sessions | a2a5881dc43ddf2b062e6210978c5b605c0c2ff6 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | pyramid_mongo_sessions/tests/test_factory.py | vkefallinos/pyramid_mongo_sessions | a2a5881dc43ddf2b062e6210978c5b605c0c2ff6 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
from pyramid import testing
class TestRedisSessionFactory(unittest.TestCase):
def _makeOne(self, request, secret='secret', **kw):
from .. import RedisSessionFactory
return RedisSessionFactory(secret, **kw)(request)
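        # The factory is configured once with the secret and keyword options,
        # then called with the request, mirroring how Pyramid invokes a
        # session factory per request.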
def _assert_is_a_header_to_set_cookie(self, header_value):
# The negative assertion below is the least complicated option for
# asserting that a Set-Cookie header sets a cookie rather than deletes
# a cookie. This helper method is to help make that intention clearer
# in the tests.
self.assertNotIn('Max-Age=0', header_value)
def _get_session_id(self, request):
from ..compat import cPickle
from ..util import get_unique_session_id
redis = request.registry._redis_sessions
session_id = get_unique_session_id(redis, timeout=100,
serialize=cPickle.dumps)
return session_id
def _serialize(self, session_id, secret='secret'):
from pyramid.session import signed_serialize
return signed_serialize(session_id, secret)
def _set_session_cookie(self, request, session_id, cookie_name='session',
secret='secret'):
cookieval = self._serialize(session_id, secret=secret)
request.cookies[cookie_name] = cookieval
def _make_request(self):
from . import DummyRedis
request = testing.DummyRequest()
request.registry._redis_sessions = DummyRedis()
request.exception = None
return request
def test_ctor_no_cookie(self):
request = self._make_request()
session = self._makeOne(request)
session_dict = session.from_redis()['managed_dict']
self.assertDictEqual(session_dict, {})
self.assertIs(session.new, True)
def test_ctor_with_cookie_still_valid(self):
request = self._make_request()
session_id_in_cookie = self._get_session_id(request)
self._set_session_cookie(request=request,
session_id=session_id_in_cookie)
session = self._makeOne(request)
self.assertEqual(session.session_id, session_id_in_cookie)
self.assertIs(session.new, False)
def test_ctor_with_bad_cookie(self):
request = self._make_request()
session_id_in_cookie = self._get_session_id(request)
invalid_secret = 'aaaaaa'
self._set_session_cookie(request=request,
session_id=session_id_in_cookie,
secret=invalid_secret)
session = self._makeOne(request)
self.assertNotEqual(session.session_id, session_id_in_cookie)
self.assertIs(session.new, True)
def test_session_id_not_in_redis(self):
request = self._make_request()
session_id_in_cookie = self._get_session_id(request)
self._set_session_cookie(request=request,
session_id=session_id_in_cookie)
redis = request.registry._redis_sessions
redis.store = {} # clears keys in DummyRedis
session = self._makeOne(request)
self.assertNotEqual(session.session_id, session_id_in_cookie)
self.assertIs(session.new, True)
def test_factory_parameters_used_to_set_cookie(self):
import re
import webob
cookie_name = 'testcookie'
cookie_max_age = 300
cookie_path = '/path'
cookie_domain = 'example.com'
cookie_secure = True
cookie_httponly = False
secret = 'test secret'
request = self._make_request()
session = request.session = self._makeOne(
request,
cookie_name=cookie_name,
cookie_max_age=cookie_max_age,
cookie_path=cookie_path,
cookie_domain=cookie_domain,
cookie_secure=cookie_secure,
cookie_httponly=cookie_httponly,
secret=secret,
)
session['key'] = 'value'
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
# Make another response and .set_cookie() using the same values and
# settings to get the expected header to compare against
response_to_check_against = webob.Response()
response_to_check_against.set_cookie(
key=cookie_name,
value=self._serialize(session_id=request.session.session_id,
secret=secret),
max_age=cookie_max_age,
path=cookie_path,
domain=cookie_domain,
secure=cookie_secure,
httponly=cookie_httponly,
)
expected_header = response_to_check_against.headers.getall(
'Set-Cookie')[0]
remove_expires_attribute = lambda s: re.sub('Expires ?=[^;]*;', '', s,
flags=re.IGNORECASE)
self.assertEqual(remove_expires_attribute(set_cookie_headers[0]),
remove_expires_attribute(expected_header))
# We have to remove the Expires attributes from each header before the
# assert comparison, as we cannot rely on their values to be the same
# (one is generated after the other, and may have a slightly later
# Expires time). The Expires value does not matter to us as it is
# calculated from Max-Age.
def test_factory_parameters_used_to_delete_cookie(self):
import webob
cookie_name = 'testcookie'
cookie_path = '/path'
cookie_domain = 'example.com'
request = self._make_request()
self._set_session_cookie(request=request,
cookie_name=cookie_name,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(
request,
cookie_name=cookie_name,
cookie_path=cookie_path,
cookie_domain=cookie_domain,
)
session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
# Make another response and .delete_cookie() using the same values and
# settings to get the expected header to compare against
response_to_check_against = webob.Response()
response_to_check_against.delete_cookie(
key=cookie_name,
path=cookie_path,
domain=cookie_domain,
)
expected_header = response.headers.getall('Set-Cookie')[0]
self.assertEqual(set_cookie_headers[0], expected_header)
# The tests below with names beginning with test_new_session_ test cases
# where first access to request.session creates a new session, as in
# test_ctor_no_cookie, test_ctor_with_bad_cookie and
# test_session_id_not_in_redis.
def test_new_session_cookie_on_exception_true_no_exception(self):
# cookie_on_exception is True by default, no exception raised
import webob
request = self._make_request()
request.session = self._makeOne(request)
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_cookie_on_exception_true_exception(self):
# cookie_on_exception is True by default, exception raised
import webob
request = self._make_request()
request.session = self._makeOne(request)
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_cookie_on_exception_false_no_exception(self):
# cookie_on_exception is False, no exception raised
import webob
request = self._make_request()
request.session = self._makeOne(request, cookie_on_exception=False)
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_cookie_on_exception_false_exception(self):
# cookie_on_exception is False, exception raised
import webob
request = self._make_request()
request.session = self._makeOne(request, cookie_on_exception=False)
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
def test_new_session_invalidate(self):
# new session -> invalidate()
import webob
request = self._make_request()
request.session = self._makeOne(request)
request.session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
def test_new_session_session_after_invalidate_coe_True_no_exception(self):
# new session -> invalidate() -> new session
# cookie_on_exception is True by default, no exception raised
import webob
request = self._make_request()
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_session_after_invalidate_coe_True_exception(self):
# new session -> invalidate() -> new session
# cookie_on_exception is True by default, exception raised
import webob
request = self._make_request()
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_session_after_invalidate_coe_False_no_exception(self):
# new session -> invalidate() -> new session
# cookie_on_exception is False, no exception raised
import webob
request = self._make_request()
session = request.session = self._makeOne(request,
cookie_on_exception=False)
session.invalidate()
session['key'] = 'value'
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_new_session_session_after_invalidate_coe_False_exception(self):
# new session -> invalidate() -> new session
# cookie_on_exception is False, exception raised
import webob
request = self._make_request()
session = request.session = self._makeOne(request,
cookie_on_exception=False)
session.invalidate()
session['key'] = 'value'
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
def test_new_session_multiple_invalidates(self):
# new session -> invalidate() -> new session -> invalidate()
# Invalidate more than once, no new session after last invalidate()
import webob
request = self._make_request()
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
def test_new_session_multiple_invalidates_with_no_new_session_in_between(
self
):
# new session -> invalidate() -> invalidate()
# Invalidate more than once, no new session in between invalidate()s,
# no new session after last invalidate()
import webob
request = self._make_request()
session = request.session = self._makeOne(request)
session.invalidate()
session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
# The tests below with names beginning with test_existing_session_ test
# cases where first access to request.session returns an existing session,
# as in test_ctor_with_cookie_still_valid.
def test_existing_session(self):
import webob
request = self._make_request()
self._set_session_cookie(
request=request,
session_id=self._get_session_id(request),
)
request.session = self._makeOne(request)
response = webob.Response()
request.response_callbacks[0](request, response)
self.assertNotIn('Set-Cookie', response.headers)
def test_existing_session_invalidate(self):
# existing session -> invalidate()
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
request.session = self._makeOne(request)
request.session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self.assertIn('Max-Age=0', set_cookie_headers[0])
def test_existing_session_session_after_invalidate_coe_True_no_exception(
self
):
# existing session -> invalidate() -> new session
# cookie_on_exception is True by default, no exception raised
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_existing_session_session_after_invalidate_coe_True_exception(
self
):
# existing session -> invalidate() -> new session
# cookie_on_exception is True by default, exception raised
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_existing_session_session_after_invalidate_coe_False_no_exception(
self
):
# existing session -> invalidate() -> new session
# cookie_on_exception is False, no exception raised
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request,
cookie_on_exception=False)
session.invalidate()
session['key'] = 'value'
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self._assert_is_a_header_to_set_cookie(set_cookie_headers[0])
def test_existing_session_session_after_invalidate_coe_False_exception(
self
):
# existing session -> invalidate() -> new session
# cookie_on_exception is False, exception raised
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request,
cookie_on_exception=False)
session.invalidate()
session['key'] = 'value'
request.exception = Exception()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self.assertIn('Max-Age=0', set_cookie_headers[0])
# Cancel setting of cookie for new session, but still delete cookie for
# the earlier invalidate().
def test_existing_session_multiple_invalidates(self):
# existing session -> invalidate() -> new session -> invalidate()
# Invalidate more than once, no new session after last invalidate()
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request)
session.invalidate()
session['key'] = 'value'
session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self.assertIn('Max-Age=0', set_cookie_headers[0])
def test_existing_session_multiple_invalidates_no_new_session_in_between(
self
):
# existing session -> invalidate() -> invalidate()
# Invalidate more than once, no new session in between invalidate()s,
# no new session after last invalidate()
import webob
request = self._make_request()
self._set_session_cookie(request=request,
session_id=self._get_session_id(request))
session = request.session = self._makeOne(request)
session.invalidate()
session.invalidate()
response = webob.Response()
request.response_callbacks[0](request, response)
set_cookie_headers = response.headers.getall('Set-Cookie')
self.assertEqual(len(set_cookie_headers), 1)
self.assertIn('Max-Age=0', set_cookie_headers[0])
def test_instance_conforms(self):
from pyramid.interfaces import ISession
from zope.interface.verify import verifyObject
request = self._make_request()
inst = self._makeOne(request)
verifyObject(ISession, inst)
def test_adjusted_session_timeout_persists(self):
request = self._make_request()
inst = self._makeOne(request)
inst.adjust_timeout_for_session(555)
session_id = inst.session_id
cookieval = self._serialize(session_id)
request.cookies['session'] = cookieval
new_session = self._makeOne(request)
self.assertEqual(new_session.timeout, 555)
def test_client_callable(self):
from . import DummyRedis
request = self._make_request()
redis = DummyRedis()
client_callable = lambda req, **kw: redis
inst = self._makeOne(request, client_callable=client_callable)
self.assertEqual(inst.redis, redis)
def test_session_factory_from_settings(self):
from .. import session_factory_from_settings
request = self._make_request()
settings = {'redis.sessions.secret': 'secret',
'redis.sessions.timeout': '999'}
inst = session_factory_from_settings(settings)(request)
self.assertEqual(inst.timeout, 999)
| 43.744376 | 79 | 0.65766 | 2,403 | 21,391 | 5.542655 | 0.085726 | 0.05541 | 0.054058 | 0.047901 | 0.800961 | 0.7674 | 0.728884 | 0.713417 | 0.689842 | 0.678204 | 0 | 0.004854 | 0.258473 | 21,391 | 488 | 80 | 43.834016 | 0.834825 | 0.136833 | 0 | 0.709512 | 0 | 0 | 0.029554 | 0.002336 | 0 | 0 | 0 | 0 | 0.125964 | 1 | 0.089974 | false | 0 | 0.084833 | 0 | 0.187661 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
877e2e4e7eda725bee7b6ea4b0c76b87b02ebe48 | 33 | py | Python | server/tests/test_model.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | 4 | 2018-02-11T09:53:22.000Z | 2022-03-06T06:35:41.000Z | server/tests/test_model.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | null | null | null | server/tests/test_model.py | liaojiacan/dyanmic-host | 0b47d8fa5b596e3e3d82d75992a00a97a9d4f457 | [
"MIT"
] | 1 | 2020-12-11T07:03:38.000Z | 2020-12-11T07:03:38.000Z | from app import create_app, db
| 11 | 31 | 0.757576 | 6 | 33 | 4 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 33 | 2 | 32 | 16.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
877fc3bb422e0ceb6977a580551be2a7a80e953a | 541 | py | Python | operator.py | chae-heechan/Python_Study | eceac851401f3e052ae6a0eb3854b80e7958af05 | [
"MIT"
] | null | null | null | operator.py | chae-heechan/Python_Study | eceac851401f3e052ae6a0eb3854b80e7958af05 | [
"MIT"
] | null | null | null | operator.py | chae-heechan/Python_Study | eceac851401f3e052ae6a0eb3854b80e7958af05 | [
"MIT"
] | null | null | null | print(1+1) # 2
print(3-2) # 1
print(5*2) # 10
print(6/3) # 2
print(2**3) # 2^3 = 8
print(5%3) # remainder: 2
print(10%3) # 1
print(5//3) # 1
print(10//3) # 3
print(10 > 3) # True
print(4 >= 7) # False
print(10 < 3) # False
print(5 <= 5) # True
print(3 == 3) # True
print(4 == 2) # False
print(3 + 4 == 7) # True
print(1 != 3) # True
print(not(1 != 3)) # False
print((3 > 0) and (3 < 5)) # True
print((3 > 0) & (3 < 5)) # True
print((3 > 0) or (3 > 5)) # True
print((3 > 0) | (3 > 5)) # True
print(5 > 4 > 3) # True
print(5 > 4 > 7) # False | 17.451613 | 33 | 0.504621 | 113 | 541 | 2.415929 | 0.159292 | 0.32967 | 0.18315 | 0.161172 | 0.186813 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0.181373 | 0.245841 | 541 | 31 | 34 | 17.451613 | 0.487745 | 0.205176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
87b72303fcadbfb19c2952fa3b1f9411dcb0b82e | 147 | py | Python | VersionDetermination/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDetermination/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | VersionDetermination/__init__.py | grobbles/verion-determination | 04600ff2c854b98849de12779e36b899cbff6679 | [
"MIT"
] | null | null | null | from VersionDetermination.Main import Main
from VersionDetermination.LastVersionDetector import *
from VersionDetermination.MergeDetector import *
| 36.75 | 54 | 0.884354 | 13 | 147 | 10 | 0.461538 | 0.553846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 147 | 3 | 55 | 49 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87c5944f22f33022be639429e38c03a81216865e | 70 | py | Python | folks/models.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | 4 | 2019-12-02T20:04:55.000Z | 2020-04-30T22:14:30.000Z | folks/models.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | null | null | null | folks/models.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | null | null | null | from models.user import User
from models.permission import Permission
| 23.333333 | 40 | 0.857143 | 10 | 70 | 6 | 0.5 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 70 | 2 | 41 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3562d6b325d9065d70745985684f5616dca913f5 | 1,193 | py | Python | python/tHome/sma/test/status.py | ZigmundRat/T-Home | 5dc8689f52d87dac890051e540b338b009293ced | [
"BSD-2-Clause"
] | 18 | 2016-04-17T19:39:28.000Z | 2020-11-19T06:55:20.000Z | python/tHome/sma/test/status.py | ZigmundRat/T-Home | 5dc8689f52d87dac890051e540b338b009293ced | [
"BSD-2-Clause"
] | 6 | 2016-10-31T13:53:45.000Z | 2019-03-20T20:47:03.000Z | python/tHome/sma/test/status.py | ZigmundRat/T-Home | 5dc8689f52d87dac890051e540b338b009293ced | [
"BSD-2-Clause"
] | 12 | 2016-10-31T12:29:08.000Z | 2021-12-28T12:18:28.000Z | import unittest
from FakeSocket import FakeSocket
import tHome as T
#===========================================================================
#===========================================================================
class TestStatus ( T.util.test.Case ) :
def test_status( self ):
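      # Hex dump of a captured device status reply; converted to bytes via
      # T.util.hex.toBytes and served through the FakeSocket below.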
reply = """
53 4D 41 00 00 04 02 A0 00 00
00 01 00 4E 00 10 60 65 13 90
7D 00 AB 94 40 3B 00 A0 F7 00
E0 27 06 72 00 00 00 00 00 00
08 80 01 02 80 51 00 00 00 00
00 00 00 00 01 48 21 08 82 22
AF 53 23 00 00 00 2F 01 00 00
33 01 00 01 C7 01 00 00 FE FF
FF 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00
"""
l = T.sma.Link( "fake", connect=False )
try:
l.socket = FakeSocket( T.util.hex.toBytes( reply ) )
o1 = l.status()
l.decode = False
buf, decoder = l.status()
o2 = decoder( buf )
finally:
l.socket = None
right = T.util.Data(
status = 'Ok',
)
      print(o1)
for k in right.keys():
r = right[k]
self.eq( getattr( o1, k ), r, k )
self.eq( getattr( o2, k ), r, k )
#===========================================================================
| 26.511111 | 76 | 0.440067 | 182 | 1,193 | 2.879121 | 0.450549 | 0.267176 | 0.30916 | 0.335878 | 0.118321 | 0.118321 | 0.09542 | 0.09542 | 0.064886 | 0.064886 | 0 | 0.216249 | 0.298407 | 1,193 | 44 | 77 | 27.113636 | 0.409797 | 0.1886 | 0 | 0 | 0 | 0 | 0.311917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.088235 | null | null | 0.029412 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
356799b94c1bf7f17bf77a57fff43438539e4725 | 161 | py | Python | analyzer/utils/Command.py | FreekDS/git-ci-analyzer | 33e179ea2e569a9df3aefee40b96e5ff6d70da1f | [
"MIT"
] | 1 | 2022-01-16T16:18:59.000Z | 2022-01-16T16:18:59.000Z | analyzer/utils/Command.py | FreekDS/git-ci-analyzer | 33e179ea2e569a9df3aefee40b96e5ff6d70da1f | [
"MIT"
] | null | null | null | analyzer/utils/Command.py | FreekDS/git-ci-analyzer | 33e179ea2e569a9df3aefee40b96e5ff6d70da1f | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import Any
class Command(ABC):
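    """Minimal command-pattern interface; subclasses implement execute()."""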
@abstractmethod
def execute(self, *args, **kwargs) -> Any:
pass
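# A hypothetical concrete command, shown for illustration only (not part of
# the original module):
#
#   class EchoCommand(Command):
#       def execute(self, *args, **kwargs) -> Any:
#           return (args, kwargs)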
| 17.888889 | 46 | 0.677019 | 20 | 161 | 5.45 | 0.7 | 0.311927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229814 | 161 | 8 | 47 | 20.125 | 0.879032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
35e7011bdfafe2b99a97a7291cf15041f6f6ee1d | 11,855 | py | Python | tests/bigquery/test_bigquery.py | eulercamposbarros/gcloud-utils | 8db8c7fce1e6343783c9ef492fc6a25fa95cd8c0 | [
"Apache-2.0"
] | 15 | 2019-02-03T16:00:01.000Z | 2021-11-19T17:47:08.000Z | tests/bigquery/test_bigquery.py | alexandreyy/gcloud-utils | 7938ec520fd06fb22f9211b1ec6410707cf43eb5 | [
"Apache-2.0"
] | 32 | 2018-12-18T22:56:43.000Z | 2021-02-10T01:55:07.000Z | tests/bigquery/test_bigquery.py | alexandreyy/gcloud-utils | 7938ec520fd06fb22f9211b1ec6410707cf43eb5 | [
"Apache-2.0"
] | 29 | 2018-12-26T13:34:58.000Z | 2021-12-20T10:24:31.000Z | """Test Bigquery Module"""
import unittest
import os
from gcloud_utils.bigquery.bigquery import Bigquery
from gcloud_utils.bigquery.query_builder import QueryBuilder
from google.cloud import bigquery
from mock.mock import MagicMock, patch, call
from more_itertools.more import side_effect
from google.api_core.exceptions import NotFound
try:
import mock
except ImportError:
import unittest.mock as mock
class TestBigquery(unittest.TestCase):
"Test Bigquery module"
def test_is_using_base_contract(self):
self.assertEqual(bigquery, Bigquery._MODEL_CLIENT)
def test_make_query(self):
query = "select * from test"
client = mock.Mock()
bigquery = Bigquery(client)
bigquery.query(query)
client.query.assert_called_once_with(query=query)
def test_make_query_with_object(self):
query = QueryBuilder("select * from test")
job_mock = mock.Mock()
client_mock = mock.Mock(**{"query.return_value": job_mock})
bigquery = Bigquery(client_mock)
bigquery.query(query)
client_mock.query.assert_called_once_with(query=query.query)
job_mock.result.assert_called_once()
def test_make_query_to_table(self):
query = "select * from test"
client_mock = mock.Mock()
dataset_id = "test_dataset"
table_id = "test_table"
bigquery = Bigquery(client_mock)
bigquery.query_to_table(query, dataset_id, table_id)
client_mock.query.assert_called_once()
def test_make_query_to_table_with_job_config(self):
dataset_id = "test_dataset"
table_id = "test_table"
query = "select * from test"
job_config_mock = mock.Mock()
dataset_mock = mock.Mock(**{"table.return_value": table_id + "_name"})
client_mock = mock.Mock(**{"dataset.return_value": dataset_mock})
bigquery = Bigquery(client_mock)
bigquery.query_to_table(query, dataset_id, table_id, job_config=job_config_mock)
client_mock.query.assert_called_once_with(query=query, job_config=job_config_mock)
self.assertEqual(table_id + "_name", job_config_mock.destination)
self.assertEqual("WRITE_TRUNCATE", job_config_mock.write_disposition)
def test_make_query_to_table_with_write_disposition(self):
dataset_id = "test_dataset"
table_id = "test_table"
query = "select * from test"
write_disposition = "test"
job_config_mock = mock.Mock()
client_mock = mock.Mock()
bigquery = Bigquery(client_mock)
bigquery.query_to_table(query, dataset_id, table_id, job_config=job_config_mock, write_disposition=write_disposition)
client_mock.query.assert_called_once_with(query=query, job_config=job_config_mock)
self.assertEqual(write_disposition, job_config_mock.write_disposition)
def test_export_table_to_google_cloud(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
client_mock = mock.Mock()
bigquery = Bigquery(client_mock)
bigquery.table_to_cloud_storage(dataset_id, table_id, bucket_name, bucket_filename)
client_mock.extract_table.assert_called_once()
def test_export_table_to_google_cloud_with_wrong_file_type(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
client_mock = mock.Mock()
bigquery = Bigquery(client_mock)
with self.assertRaises(Exception) as context:
bigquery.table_to_cloud_storage(dataset_id, table_id, bucket_name, bucket_filename, export_format="no_exists")
client_mock.extract_table.assert_not_called()
def test_export_table_to_google_cloud_with_wrong_compression_type(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
client_mock = mock.Mock()
bigquery = Bigquery(client_mock)
with self.assertRaises(Exception) as context:
bigquery.table_to_cloud_storage(dataset_id, table_id, bucket_name, bucket_filename, compression_format="no_exists")
client_mock.extract_table.assert_not_called()
def test_export_table_to_google_cloud_with_wrong_compression_type_and_file_type(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
client_mock = mock.Mock()
bigquery = Bigquery(client_mock)
with self.assertRaises(Exception) as context:
bigquery.table_to_cloud_storage(dataset_id, table_id, bucket_name, bucket_filename, compression_format="no_exists", export_format="no_exists")
client_mock.extract_table.assert_not_called()
def test_export_table_to_google_cloud_with_job_config(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
location = "test_US"
expected_destination = "gs://test_bucket/test_filename_*.csv.gz"
export_format = "csv"
compression_format = "gz"
job_config_mock = mock.Mock()
dataset_mock = mock.Mock(**{"table.return_value": table_id + "_name"})
client_mock = mock.Mock(**{"dataset.return_value": dataset_mock})
bigquery = Bigquery(client_mock)
bigquery.table_to_cloud_storage(
dataset_id, table_id, bucket_name, bucket_filename,
export_format=export_format, compression_format=compression_format,
job_config=job_config_mock, location=location
)
client_mock.extract_table.assert_called_once_with(
table_id + "_name",
expected_destination,
location=location,
job_config=job_config_mock
)
self.assertEqual(bigquery.COMPRESSION_FORMATS[compression_format], job_config_mock.compression)
self.assertEqual(bigquery.FILE_FORMATS[export_format], job_config_mock.destination_format)
def test_export_table_to_google_cloud_with_job_config_and_extra_params(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
location = "test_US"
expected_destination = "gs://test_bucket/test_filename_*.json"
export_format = "json"
compression_format = None
xuxu = "test_xuxu"
job_config_mock = mock.Mock()
dataset_mock = mock.Mock(**{"table.return_value": table_id + "_name"})
client_mock = mock.Mock(**{"dataset.return_value": dataset_mock})
bigquery = Bigquery(client_mock)
bigquery.table_to_cloud_storage(
dataset_id, table_id, bucket_name, bucket_filename,
export_format=export_format, compression_format=compression_format,
job_config=job_config_mock, location=location, xuxuxu=xuxu
)
client_mock.extract_table.assert_called_once_with(
table_id + "_name",
expected_destination,
location=location,
job_config=job_config_mock,
xuxuxu=xuxu
)
self.assertEqual(bigquery.COMPRESSION_FORMATS[compression_format], job_config_mock.compression)
self.assertEqual(bigquery.FILE_FORMATS[export_format], job_config_mock.destination_format)
def test_import_table_from_google_cloud(self):
dataset_id = "test_dataset"
table_id = "test_table"
bucket_name = "test_bucket"
bucket_filename = "test_filename"
expected_source = "gs://test_bucket/test_filename"
expected_table = "test_dataset.test_table"
dataset_mock = mock.Mock(**{"table.return_value": mock.Mock(bigquery.Table)})
client_mock = mock.Mock(**{"dataset.return_value": mock.Mock(bigquery.Dataset)})
job_config_mock = mock.Mock()
bq = Bigquery(client_mock)
bq.cloud_storage_to_table(bucket_name, bucket_filename, dataset_id, table_id, job_config_mock)
client_mock.load_table_from_uri.assert_called_once_with(
expected_source,
client_mock.dataset().table(),
job_config=job_config_mock,
location='US'
)
def test_table_exists_same_project(self):
table = mock.Mock()
dataset = mock.Mock()
dataset.table = MagicMock(return_value=table)
client = mock.Mock()
client.get_table = MagicMock()
client.dataset = MagicMock(return_value=dataset)
bigquery = Bigquery(client)
with patch("gcloud_utils.bigquery.bigquery.bigquery") as original_bigquery:
original_bigquery.Client = MagicMock()
assert bigquery.table_exists(table_id="my_table", dataset_id="my_dataset")
assert original_bigquery.Client.call_args_list == []
assert client.get_table.call_args_list == [call(table)]
assert client.dataset.call_args_list == [call("my_dataset")]
assert dataset.table.call_args_list == [call("my_table")]
def test_table_exists_false_same_project(self):
table = mock.Mock()
dataset = mock.Mock()
dataset.table = MagicMock(return_value=table)
client = mock.Mock()
client.get_table = MagicMock(side_effect=NotFound("xxx"))
client.dataset = MagicMock(return_value=dataset)
bigquery = Bigquery(client)
with patch("gcloud_utils.bigquery.bigquery.bigquery") as original_bigquery:
original_bigquery.Client = MagicMock()
assert not bigquery.table_exists(table_id="my_table", dataset_id="my_dataset")
assert original_bigquery.Client.call_args_list == []
def test_table_exists_other_project(self):
table = mock.Mock()
dataset = mock.Mock()
dataset.table = MagicMock(return_value=table)
client = mock.Mock()
client.get_table = MagicMock()
client.dataset = MagicMock(return_value=dataset)
other_client = mock.Mock()
other_client.dataset = MagicMock(return_value=dataset)
bigquery = Bigquery(client)
with patch("gcloud_utils.bigquery.bigquery.bigquery") as original_bigquery:
original_bigquery.Client = MagicMock(return_value=other_client)
assert bigquery.table_exists(table_id="my_table", dataset_id="my_dataset", project_id="my_project")
assert original_bigquery.Client.call_args_list == [call("my_project")]
assert client.get_table.call_args_list == [call(table)]
assert client.dataset.call_args_list == []
assert other_client.dataset.call_args_list == [call("my_dataset")]
assert dataset.table.call_args_list == [call("my_table")]
def test_table_exists_false_other_project(self):
table = mock.Mock()
dataset = mock.Mock()
dataset.table = MagicMock(return_value=table)
client = mock.Mock()
client.get_table = MagicMock(side_effect=NotFound("xxx"))
client.dataset = MagicMock(return_value=dataset)
other_client = mock.Mock()
other_client.dataset = MagicMock(return_value=dataset)
bigquery = Bigquery(client)
with patch("gcloud_utils.bigquery.bigquery.bigquery") as original_bigquery:
original_bigquery.Client = MagicMock(return_value=other_client)
assert not bigquery.table_exists(table_id="my_table", dataset_id="my_dataset", project_id="my_project")
assert original_bigquery.Client.call_args_list == [call("my_project")]
assert other_client.dataset.call_args_list == [call("my_dataset")]
| 39.516667 | 154 | 0.688739 | 1,429 | 11,855 | 5.323303 | 0.074878 | 0.0631 | 0.037597 | 0.026029 | 0.854213 | 0.836729 | 0.825818 | 0.781911 | 0.767451 | 0.744183 | 0 | 0 | 0.220413 | 11,855 | 299 | 155 | 39.648829 | 0.82309 | 0.003458 | 0 | 0.642241 | 0 | 0 | 0.104404 | 0.024093 | 0 | 0 | 0 | 0 | 0.172414 | 1 | 0.073276 | false | 0 | 0.051724 | 0 | 0.12931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ea7f5a326069ff59028fd6e381514821572b0522 | 37 | py | Python | cv_datetime_utils/__init__.py | WildflowerSchools/wf-cv-datetime-utils | c47ad83b860d49ba84ec98cea36c8b29536be623 | [
"MIT"
] | null | null | null | cv_datetime_utils/__init__.py | WildflowerSchools/wf-cv-datetime-utils | c47ad83b860d49ba84ec98cea36c8b29536be623 | [
"MIT"
] | null | null | null | cv_datetime_utils/__init__.py | WildflowerSchools/wf-cv-datetime-utils | c47ad83b860d49ba84ec98cea36c8b29536be623 | [
"MIT"
] | 2 | 2019-12-06T19:45:55.000Z | 2019-12-11T22:37:05.000Z | from cv_datetime_utils.core import *
| 18.5 | 36 | 0.837838 | 6 | 37 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
575d92c8f2715e7b3dc96ffec21ac73cd5643cc4 | 123 | py | Python | src/backend/api/routes/__init__.py | jqhoogland/anki-squared | 518f8a393da5d55e10222bd11b585affdab6eab5 | [
"MIT"
] | 2 | 2021-02-17T13:42:29.000Z | 2021-11-15T11:37:09.000Z | src/backend/api/routes/__init__.py | jqhoogland/anki-squared | 518f8a393da5d55e10222bd11b585affdab6eab5 | [
"MIT"
] | null | null | null | src/backend/api/routes/__init__.py | jqhoogland/anki-squared | 518f8a393da5d55e10222bd11b585affdab6eab5 | [
"MIT"
] | null | null | null | from .resources import api_resources
from .queue import api_queue
from .notes import api_notes
from .decks import api_decks | 30.75 | 36 | 0.845528 | 20 | 123 | 5 | 0.35 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 123 | 4 | 37 | 30.75 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
578a66980a92807973e109e8d8965feef4d645ba | 149 | py | Python | watcher/predictor_module/__init__.py | framaz/eye_control | 2b4a15b95b4e1f2e9e8c7359416747fd4d26d4a9 | [
"MIT"
] | 2 | 2020-07-19T08:04:03.000Z | 2021-02-03T14:16:04.000Z | watcher/predictor_module/__init__.py | framaz/eye_control | 2b4a15b95b4e1f2e9e8c7359416747fd4d26d4a9 | [
"MIT"
] | 3 | 2020-01-31T11:15:06.000Z | 2022-03-25T19:10:47.000Z | watcher/predictor_module/__init__.py | framaz/eye_control | 2b4a15b95b4e1f2e9e8c7359416747fd4d26d4a9 | [
"MIT"
] | null | null | null | from .basic_predictor import BasicPredictor
from .gazeml_predictor import GazeMLPredictor
from .visual_debug_predictor import VisualDebugPredictor
| 24.833333 | 56 | 0.885906 | 16 | 149 | 8 | 0.625 | 0.351563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09396 | 149 | 5 | 57 | 29.8 | 0.948148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57a3db3dac1baf8b98cfcf532c3f1ec0c6bd35ab | 12,453 | py | Python | FeatureEngineeringPy_DataScience/demo171_randomimputation_titanic.py | mahnooranjum/Programming_DataScience | f7a4215d4615b3f8460c3a1944a585628cf6930d | [
"MIT"
] | null | null | null | FeatureEngineeringPy_DataScience/demo171_randomimputation_titanic.py | mahnooranjum/Programming_DataScience | f7a4215d4615b3f8460c3a1944a585628cf6930d | [
"MIT"
] | null | null | null | FeatureEngineeringPy_DataScience/demo171_randomimputation_titanic.py | mahnooranjum/Programming_DataScience | f7a4215d4615b3f8460c3a1944a585628cf6930d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Demo171_RandomImputation_Titanic.ipynb
## Imputation
- Replacing missing data with statistical estimates of the missing values is called imputation
- Imputation completes a dataset and removes missing values
- Replace by mean if the variable has a Normal distribution
- Replace by median if the variable has a Skewed distribution
- Iterative imputation computes the missing value using all the other values in the dataset
- **Random sampling takes a random value from available observations and uses that value to fill the NA**
### When to use it?
- When data are missing completely at random (MCAR)
### Pros
- Easy
- Completes the dataset and does not lose much information
- Preserves variable variance
### Cons
- Distorts the covariance with other variables
- Random in nature
### Key Takeaway
- We usually create a new variable for the missing data to capture the relations where data is not MCAR
- So imputation handles the MCAR aspect, whereas the new variable captures all the other statistical relations
"""
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from google.colab import drive
drive.mount('/content/gdrive')
data = pd.read_csv("gdrive/My Drive/Colab Notebooks/FeatureEngineering/train.csv")
"""## Titanic"""
data = data[['Age', 'Fare','Survived']]
data.head()
data.isnull().mean()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data[['Age', 'Fare']], data['Survived'], test_size=0.2)
X_train.shape, X_test.shape
def impute(df, column, dft):
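    # Fill NaNs in `column` of df with values sampled uniformly at random from
    # the non-missing observations of the same column in dft (here always the
    # training set, so the test set is imputed without using its own values).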
df_temp = df.copy()
df_temp[column] = df_temp[column].apply(lambda x: np.random.choice(dft[column].dropna().values) if np.isnan(x) else x)
return df_temp
sns.distplot(X_train['Age'])
X_train_0 = impute(X_train, 'Age', X_train)
X_test_0 = impute(X_test, 'Age', X_train)
X_train_0.shape
type(X_train_0)
X_train_0 = X_train_0.values
type(X_train_0)
X_test_0 = X_test_0.values
fig, ax = plt.subplots(1,2, figsize=(10,10))
sns.distplot(X_train['Age'], ax = ax[0], color='blue')
sns.distplot(X_train_0[:,0], ax = ax[1], color='red')
from sklearn.impute import SimpleImputer
obj = SimpleImputer(missing_values = np.nan, strategy= 'mean')
X_train_mean = obj.fit_transform(X_train)
X_test_mean = obj.transform(X_test)
fig, ax = plt.subplots(1,2, figsize=(10,10))
sns.distplot(X_train['Age'], ax = ax[0], color='blue')
sns.distplot(X_train_mean[:,0], ax = ax[1], color='red')
from sklearn.impute import SimpleImputer
obj = SimpleImputer(missing_values = np.nan, strategy= 'median')
X_train_median = obj.fit_transform(X_train)
X_test_median = obj.transform(X_test)
fig, ax = plt.subplots(1,2, figsize=(10,10))
sns.distplot(X_train['Age'], ax = ax[0], color='blue')
sns.distplot(X_train_median[:,0], ax = ax[1], color='red')
from sklearn.impute import SimpleImputer
obj = SimpleImputer(missing_values = np.nan, strategy= 'most_frequent')
X_train_mode = obj.fit_transform(X_train)
X_test_mode = obj.transform(X_test)
fig, ax = plt.subplots(1,2, figsize=(10,10))
sns.distplot(X_train['Age'], ax = ax[0], color='blue')
sns.distplot(X_train_mode[:,0], ax = ax[1], color='red')
print('Std original: ', X_train['Age'].std())
print('Std 0: ', X_train_0[:,0].std())
print('Std mean: ', X_train_mean[:,0].std())
print('Std median: ', X_train_median[:,0].std())
print('Std mode: ', X_train_mode[:,0].std())
"""### Model performance"""
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
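# Each classifier below is evaluated with the same fit/predict/score pattern
# on the four imputed variants; an equivalent loop-based sketch (assuming the
# matrices defined above):
#
#   splits = [(X_train_0, X_test_0), (X_train_mean, X_test_mean),
#             (X_train_median, X_test_median), (X_train_mode, X_test_mode)]
#   for X_tr, X_te in splits:
#       clf = LogisticRegression().fit(X_tr, y_train)
#       print(accuracy_score(y_test, np.round(clf.predict(X_te)).flatten()))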
classifier = LogisticRegression()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.linear_model import RidgeClassifierCV
classifier = RidgeClassifierCV()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.svm import SVC
classifier = SVC()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.neural_network import MLPClassifier
classifier = MLPClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.svm import LinearSVC
classifier = LinearSVC()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.ensemble import GradientBoostingClassifier
classifier = GradientBoostingClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.linear_model import Perceptron
classifier = Perceptron()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(X_train_0,y_train)
y_pred = classifier.predict(X_test_0)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mean,y_train)
y_pred = classifier.predict(X_test_mean)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_median,y_train)
y_pred = classifier.predict(X_test_median)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
classifier.fit(X_train_mode,y_train)
y_pred = classifier.predict(X_test_mode)
y_pred = np.round(y_pred).flatten()
print(accuracy_score(y_test, y_pred))
| 30.29927 | 122 | 0.78728 | 2,125 | 12,453 | 4.299294 | 0.091765 | 0.113835 | 0.149409 | 0.108144 | 0.767404 | 0.754378 | 0.750547 | 0.74201 | 0.74201 | 0.74201 | 0 | 0.007382 | 0.086244 | 12,453 | 410 | 123 | 30.373171 | 0.7955 | 0.0852 | 0 | 0.778547 | 0 | 0 | 0.020545 | 0.003351 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00346 | false | 0 | 0.079585 | 0 | 0.086505 | 0.197232 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57b2f0cb8ec6230af5d8dcb523e81135e27508fd | 31,407 | py | Python | rtamt/parser/ltl/LtlLexer.py | sguysc/rtamt | a16db77b61028f774d81457ff22e666229a5432c | [
"BSD-3-Clause"
] | 24 | 2019-12-04T00:20:16.000Z | 2022-03-24T17:48:14.000Z | rtamt/parser/ltl/LtlLexer.py | sguysc/rtamt | a16db77b61028f774d81457ff22e666229a5432c | [
"BSD-3-Clause"
] | 142 | 2020-01-16T15:36:21.000Z | 2022-03-28T20:40:45.000Z | rtamt/parser/ltl/LtlLexer.py | sguysc/rtamt | a16db77b61028f774d81457ff22e666229a5432c | [
"BSD-3-Clause"
] | 17 | 2020-07-07T20:32:08.000Z | 2022-03-07T07:20:22.000Z | # Generated from LtlLexer.g4 by ANTLR 4.5.1
# encoding: utf-8
from __future__ import print_function
from antlr4 import *
from io import StringIO
def serializedATN():
with StringIO() as buf:
buf.write(u"\3\u0430\ud6d1\u8206\uad2d\u4417\uaef1\u8d80\uaadd\2")
buf.write(u"K\u02d4\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4")
buf.write(u"\7\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r")
buf.write(u"\t\r\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22")
buf.write(u"\4\23\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4")
buf.write(u"\30\t\30\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35")
buf.write(u"\t\35\4\36\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4")
buf.write(u"$\t$\4%\t%\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t")
buf.write(u",\4-\t-\4.\t.\4/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63")
buf.write(u"\t\63\4\64\t\64\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\4")
buf.write(u"9\t9\4:\t:\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA")
buf.write(u"\4B\tB\4C\tC\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH\4I\tI\4J\t")
buf.write(u"J\4K\tK\4L\tL\4M\tM\4N\tN\4O\tO\4P\tP\4Q\tQ\4R\tR\4S")
buf.write(u"\tS\4T\tT\4U\tU\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\tZ\4[\t[\4")
buf.write(u"\\\t\\\4]\t]\4^\t^\4_\t_\4`\t`\4a\ta\4b\tb\4c\tc\4d\t")
buf.write(u"d\3\2\3\2\3\3\3\3\3\4\3\4\3\5\3\5\3\6\3\6\3\7\3\7\3\b")
buf.write(u"\3\b\3\t\3\t\3\n\3\n\3\13\3\13\3\f\3\f\3\r\3\r\3\16\3")
buf.write(u"\16\3\17\3\17\3\20\3\20\3\21\3\21\3\21\3\21\3\22\3\22")
buf.write(u"\3\22\3\22\3\22\3\23\3\23\3\23\3\23\3\24\3\24\3\24\3")
buf.write(u"\24\3\25\3\25\3\26\3\26\3\26\3\27\3\27\3\27\3\30\3\30")
buf.write(u"\3\30\3\31\3\31\3\31\3\32\3\32\3\32\3\32\3\32\3\32\3")
buf.write(u"\33\3\33\3\33\3\33\3\33\3\33\3\33\3\34\3\34\3\34\3\34")
buf.write(u"\3\34\3\34\3\35\3\35\3\35\3\35\3\35\3\35\3\35\3\36\3")
buf.write(u"\36\3\36\3\36\3\36\3\36\3\36\3\36\3\36\3\37\3\37\3\37")
buf.write(u"\3\37\3\37\3\37\3 \3 \3 \3 \3 \3!\3!\3!\3!\3!\3!\3\"")
buf.write(u"\3\"\3\"\3\"\3\"\3#\3#\3#\3#\3#\3#\3#\3#\3$\3$\3$\3$")
buf.write(u"\3%\3%\3%\3%\3%\3&\3&\3&\3&\3&\3&\3&\3&\3&\3&\3\'\3\'")
buf.write(u"\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3\'\3(\3")
buf.write(u"(\3(\3(\3(\3)\3)\3)\3)\5)\u0172\n)\3*\3*\3*\5*\u0177")
buf.write(u"\n*\3+\3+\3+\3+\5+\u017d\n+\3,\3,\3,\3,\3,\3,\5,\u0185")
buf.write(u"\n,\3-\3-\3-\3-\3-\3-\3-\3-\3-\5-\u0190\n-\3.\3.\3.\3")
buf.write(u".\3/\3/\3/\3/\3/\3\60\3\60\3\60\3\60\3\60\3\61\3\61\3")
buf.write(u"\61\3\61\3\61\3\61\3\61\5\61\u01a7\n\61\3\62\3\62\3\62")
buf.write(u"\3\62\3\62\3\62\3\62\3\62\3\62\3\62\3\62\5\62\u01b4\n")
buf.write(u"\62\3\63\3\63\3\63\3\63\3\63\3\63\5\63\u01bc\n\63\3\64")
buf.write(u"\3\64\3\64\3\64\3\64\3\64\3\64\5\64\u01c5\n\64\3\65\3")
buf.write(u"\65\3\65\3\65\3\65\3\65\3\65\3\65\3\65\3\65\3\65\3\65")
buf.write(u"\3\65\5\65\u01d4\n\65\3\66\3\66\3\66\3\66\3\66\5\66\u01db")
buf.write(u"\n\66\3\67\3\67\3\67\3\67\3\67\3\67\5\67\u01e3\n\67\3")
buf.write(u"8\38\38\38\38\58\u01ea\n8\39\39\39\39\39\59\u01f1\n9")
buf.write(u"\3:\3:\3:\3;\3;\3;\3;\3<\3<\3<\3=\3=\3=\3>\3>\3?\3?\3")
buf.write(u"@\3@\3A\3A\5A\u0208\nA\3B\3B\3B\3B\3B\3B\3B\3B\5B\u0212")
buf.write(u"\nB\3C\3C\3C\3C\3C\3C\3C\3C\3C\3C\5C\u021e\nC\3D\3D\3")
buf.write(u"D\5D\u0223\nD\3E\3E\3E\5E\u0228\nE\3E\3E\3E\5E\u022d")
buf.write(u"\nE\5E\u022f\nE\3F\3F\5F\u0233\nF\3F\5F\u0236\nF\3G\3")
buf.write(u"G\5G\u023a\nG\3H\3H\3I\6I\u023f\nI\rI\16I\u0240\3J\3")
buf.write(u"J\5J\u0245\nJ\3K\6K\u0248\nK\rK\16K\u0249\3L\3L\3L\3")
buf.write(u"L\3M\3M\5M\u0252\nM\3M\5M\u0255\nM\3N\3N\3O\6O\u025a")
buf.write(u"\nO\rO\16O\u025b\3P\3P\5P\u0260\nP\3Q\3Q\3Q\3Q\3R\3R")
buf.write(u"\5R\u0268\nR\3R\5R\u026b\nR\3S\3S\3T\6T\u0270\nT\rT\16")
buf.write(u"T\u0271\3U\3U\5U\u0276\nU\3V\3V\3W\3W\3W\5W\u027d\nW")
buf.write(u"\3W\5W\u0280\nW\3W\3W\3W\5W\u0285\nW\3W\3W\3W\5W\u028a")
buf.write(u"\nW\3X\3X\3X\3Y\3Y\3Z\5Z\u0292\nZ\3Z\6Z\u0295\nZ\rZ\16")
buf.write(u"Z\u0296\3[\3[\3\\\3\\\7\\\u029d\n\\\f\\\16\\\u02a0\13")
buf.write(u"\\\3]\3]\5]\u02a4\n]\3^\3^\3^\5^\u02a9\n^\3_\3_\5_\u02ad")
buf.write(u"\n_\3`\3`\3a\3a\3a\3a\3b\6b\u02b6\nb\rb\16b\u02b7\3b")
buf.write(u"\3b\3c\3c\3c\3c\7c\u02c0\nc\fc\16c\u02c3\13c\3c\3c\3")
buf.write(u"c\3c\3c\3d\3d\3d\3d\7d\u02ce\nd\fd\16d\u02d1\13d\3d\3")
buf.write(u"d\3\u02c1\2e\3\3\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23\13")
buf.write(u"\25\f\27\r\31\16\33\17\35\20\37\21!\22#\23%\24\'\25)")
buf.write(u"\26+\27-\30/\31\61\32\63\33\65\34\67\359\36;\37= ?!A")
buf.write(u"\"C#E$G%I&K\'M(O)Q*S+U,W-Y.[/]\60_\61a\62c\63e\64g\65")
buf.write(u"i\66k\67m8o9q:s;u<w=y>{?}@\177A\u0081B\u0083C\u0085D")
buf.write(u"\u0087E\u0089\2\u008b\2\u008d\2\u008f\2\u0091\2\u0093")
buf.write(u"\2\u0095\2\u0097\2\u0099\2\u009b\2\u009d\2\u009f\2\u00a1")
buf.write(u"\2\u00a3\2\u00a5\2\u00a7\2\u00a9\2\u00abF\u00ad\2\u00af")
buf.write(u"\2\u00b1\2\u00b3\2\u00b5\2\u00b7G\u00b9\2\u00bb\2\u00bd")
buf.write(u"\2\u00bf\2\u00c1H\u00c3I\u00c5J\u00c7K\3\2\r\3\2\63;")
buf.write(u"\4\2ZZzz\5\2\62;CHch\4\2DDdd\3\2\62\63\4\2GGgg\4\2--")
buf.write(u"//\4\2C\\c|\3\2\f\f\5\2\13\13\16\17\"\"\4\2\f\f\17\17")
buf.write(u"\u02ec\2\3\3\2\2\2\2\5\3\2\2\2\2\7\3\2\2\2\2\t\3\2\2")
buf.write(u"\2\2\13\3\2\2\2\2\r\3\2\2\2\2\17\3\2\2\2\2\21\3\2\2\2")
buf.write(u"\2\23\3\2\2\2\2\25\3\2\2\2\2\27\3\2\2\2\2\31\3\2\2\2")
buf.write(u"\2\33\3\2\2\2\2\35\3\2\2\2\2\37\3\2\2\2\2!\3\2\2\2\2")
buf.write(u"#\3\2\2\2\2%\3\2\2\2\2\'\3\2\2\2\2)\3\2\2\2\2+\3\2\2")
buf.write(u"\2\2-\3\2\2\2\2/\3\2\2\2\2\61\3\2\2\2\2\63\3\2\2\2\2")
buf.write(u"\65\3\2\2\2\2\67\3\2\2\2\29\3\2\2\2\2;\3\2\2\2\2=\3\2")
buf.write(u"\2\2\2?\3\2\2\2\2A\3\2\2\2\2C\3\2\2\2\2E\3\2\2\2\2G\3")
buf.write(u"\2\2\2\2I\3\2\2\2\2K\3\2\2\2\2M\3\2\2\2\2O\3\2\2\2\2")
buf.write(u"Q\3\2\2\2\2S\3\2\2\2\2U\3\2\2\2\2W\3\2\2\2\2Y\3\2\2\2")
buf.write(u"\2[\3\2\2\2\2]\3\2\2\2\2_\3\2\2\2\2a\3\2\2\2\2c\3\2\2")
buf.write(u"\2\2e\3\2\2\2\2g\3\2\2\2\2i\3\2\2\2\2k\3\2\2\2\2m\3\2")
buf.write(u"\2\2\2o\3\2\2\2\2q\3\2\2\2\2s\3\2\2\2\2u\3\2\2\2\2w\3")
buf.write(u"\2\2\2\2y\3\2\2\2\2{\3\2\2\2\2}\3\2\2\2\2\177\3\2\2\2")
buf.write(u"\2\u0081\3\2\2\2\2\u0083\3\2\2\2\2\u0085\3\2\2\2\2\u0087")
buf.write(u"\3\2\2\2\2\u00ab\3\2\2\2\2\u00b7\3\2\2\2\2\u00c1\3\2")
buf.write(u"\2\2\2\u00c3\3\2\2\2\2\u00c5\3\2\2\2\2\u00c7\3\2\2\2")
buf.write(u"\3\u00c9\3\2\2\2\5\u00cb\3\2\2\2\7\u00cd\3\2\2\2\t\u00cf")
buf.write(u"\3\2\2\2\13\u00d1\3\2\2\2\r\u00d3\3\2\2\2\17\u00d5\3")
buf.write(u"\2\2\2\21\u00d7\3\2\2\2\23\u00d9\3\2\2\2\25\u00db\3\2")
buf.write(u"\2\2\27\u00dd\3\2\2\2\31\u00df\3\2\2\2\33\u00e1\3\2\2")
buf.write(u"\2\35\u00e3\3\2\2\2\37\u00e5\3\2\2\2!\u00e7\3\2\2\2#")
buf.write(u"\u00eb\3\2\2\2%\u00f0\3\2\2\2\'\u00f4\3\2\2\2)\u00f8")
buf.write(u"\3\2\2\2+\u00fa\3\2\2\2-\u00fd\3\2\2\2/\u0100\3\2\2\2")
buf.write(u"\61\u0103\3\2\2\2\63\u0106\3\2\2\2\65\u010c\3\2\2\2\67")
buf.write(u"\u0113\3\2\2\29\u0119\3\2\2\2;\u0120\3\2\2\2=\u0129\3")
buf.write(u"\2\2\2?\u012f\3\2\2\2A\u0134\3\2\2\2C\u013a\3\2\2\2E")
buf.write(u"\u013f\3\2\2\2G\u0147\3\2\2\2I\u014b\3\2\2\2K\u0150\3")
buf.write(u"\2\2\2M\u015a\3\2\2\2O\u0168\3\2\2\2Q\u0171\3\2\2\2S")
buf.write(u"\u0176\3\2\2\2U\u017c\3\2\2\2W\u0184\3\2\2\2Y\u018f\3")
buf.write(u"\2\2\2[\u0191\3\2\2\2]\u0195\3\2\2\2_\u019a\3\2\2\2a")
buf.write(u"\u01a6\3\2\2\2c\u01b3\3\2\2\2e\u01bb\3\2\2\2g\u01c4\3")
buf.write(u"\2\2\2i\u01d3\3\2\2\2k\u01da\3\2\2\2m\u01e2\3\2\2\2o")
buf.write(u"\u01e9\3\2\2\2q\u01f0\3\2\2\2s\u01f2\3\2\2\2u\u01f5\3")
buf.write(u"\2\2\2w\u01f9\3\2\2\2y\u01fc\3\2\2\2{\u01ff\3\2\2\2}")
buf.write(u"\u0201\3\2\2\2\177\u0203\3\2\2\2\u0081\u0207\3\2\2\2")
buf.write(u"\u0083\u0211\3\2\2\2\u0085\u021d\3\2\2\2\u0087\u0222")
buf.write(u"\3\2\2\2\u0089\u022e\3\2\2\2\u008b\u0230\3\2\2\2\u008d")
buf.write(u"\u0239\3\2\2\2\u008f\u023b\3\2\2\2\u0091\u023e\3\2\2")
buf.write(u"\2\u0093\u0244\3\2\2\2\u0095\u0247\3\2\2\2\u0097\u024b")
buf.write(u"\3\2\2\2\u0099\u024f\3\2\2\2\u009b\u0256\3\2\2\2\u009d")
buf.write(u"\u0259\3\2\2\2\u009f\u025f\3\2\2\2\u00a1\u0261\3\2\2")
buf.write(u"\2\u00a3\u0265\3\2\2\2\u00a5\u026c\3\2\2\2\u00a7\u026f")
buf.write(u"\3\2\2\2\u00a9\u0275\3\2\2\2\u00ab\u0277\3\2\2\2\u00ad")
buf.write(u"\u0289\3\2\2\2\u00af\u028b\3\2\2\2\u00b1\u028e\3\2\2")
buf.write(u"\2\u00b3\u0291\3\2\2\2\u00b5\u0298\3\2\2\2\u00b7\u029a")
buf.write(u"\3\2\2\2\u00b9\u02a3\3\2\2\2\u00bb\u02a8\3\2\2\2\u00bd")
buf.write(u"\u02ac\3\2\2\2\u00bf\u02ae\3\2\2\2\u00c1\u02b0\3\2\2")
buf.write(u"\2\u00c3\u02b5\3\2\2\2\u00c5\u02bb\3\2\2\2\u00c7\u02c9")
buf.write(u"\3\2\2\2\u00c9\u00ca\7/\2\2\u00ca\4\3\2\2\2\u00cb\u00cc")
buf.write(u"\7-\2\2\u00cc\6\3\2\2\2\u00cd\u00ce\7,\2\2\u00ce\b\3")
buf.write(u"\2\2\2\u00cf\u00d0\7\61\2\2\u00d0\n\3\2\2\2\u00d1\u00d2")
buf.write(u"\7*\2\2\u00d2\f\3\2\2\2\u00d3\u00d4\7+\2\2\u00d4\16\3")
buf.write(u"\2\2\2\u00d5\u00d6\7}\2\2\u00d6\20\3\2\2\2\u00d7\u00d8")
buf.write(u"\7\177\2\2\u00d8\22\3\2\2\2\u00d9\u00da\7]\2\2\u00da")
buf.write(u"\24\3\2\2\2\u00db\u00dc\7_\2\2\u00dc\26\3\2\2\2\u00dd")
buf.write(u"\u00de\7=\2\2\u00de\30\3\2\2\2\u00df\u00e0\7<\2\2\u00e0")
buf.write(u"\32\3\2\2\2\u00e1\u00e2\7.\2\2\u00e2\34\3\2\2\2\u00e3")
buf.write(u"\u00e4\7\60\2\2\u00e4\36\3\2\2\2\u00e5\u00e6\7B\2\2\u00e6")
buf.write(u" \3\2\2\2\u00e7\u00e8\7c\2\2\u00e8\u00e9\7d\2\2\u00e9")
buf.write(u"\u00ea\7u\2\2\u00ea\"\3\2\2\2\u00eb\u00ec\7u\2\2\u00ec")
buf.write(u"\u00ed\7s\2\2\u00ed\u00ee\7t\2\2\u00ee\u00ef\7v\2\2\u00ef")
buf.write(u"$\3\2\2\2\u00f0\u00f1\7g\2\2\u00f1\u00f2\7z\2\2\u00f2")
buf.write(u"\u00f3\7r\2\2\u00f3&\3\2\2\2\u00f4\u00f5\7r\2\2\u00f5")
buf.write(u"\u00f6\7q\2\2\u00f6\u00f7\7y\2\2\u00f7(\3\2\2\2\u00f8")
buf.write(u"\u00f9\7u\2\2\u00f9*\3\2\2\2\u00fa\u00fb\7o\2\2\u00fb")
buf.write(u"\u00fc\7u\2\2\u00fc,\3\2\2\2\u00fd\u00fe\7w\2\2\u00fe")
buf.write(u"\u00ff\7u\2\2\u00ff.\3\2\2\2\u0100\u0101\7p\2\2\u0101")
buf.write(u"\u0102\7u\2\2\u0102\60\3\2\2\2\u0103\u0104\7r\2\2\u0104")
buf.write(u"\u0105\7u\2\2\u0105\62\3\2\2\2\u0106\u0107\7v\2\2\u0107")
buf.write(u"\u0108\7q\2\2\u0108\u0109\7r\2\2\u0109\u010a\7k\2\2\u010a")
buf.write(u"\u010b\7e\2\2\u010b\64\3\2\2\2\u010c\u010d\7k\2\2\u010d")
buf.write(u"\u010e\7o\2\2\u010e\u010f\7r\2\2\u010f\u0110\7q\2\2\u0110")
buf.write(u"\u0111\7t\2\2\u0111\u0112\7v\2\2\u0112\66\3\2\2\2\u0113")
buf.write(u"\u0114\7k\2\2\u0114\u0115\7p\2\2\u0115\u0116\7r\2\2\u0116")
buf.write(u"\u0117\7w\2\2\u0117\u0118\7v\2\2\u01188\3\2\2\2\u0119")
buf.write(u"\u011a\7q\2\2\u011a\u011b\7w\2\2\u011b\u011c\7v\2\2\u011c")
buf.write(u"\u011d\7r\2\2\u011d\u011e\7w\2\2\u011e\u011f\7v\2\2\u011f")
buf.write(u":\3\2\2\2\u0120\u0121\7k\2\2\u0121\u0122\7p\2\2\u0122")
buf.write(u"\u0123\7v\2\2\u0123\u0124\7g\2\2\u0124\u0125\7t\2\2\u0125")
buf.write(u"\u0126\7p\2\2\u0126\u0127\7c\2\2\u0127\u0128\7n\2\2\u0128")
buf.write(u"<\3\2\2\2\u0129\u012a\7e\2\2\u012a\u012b\7q\2\2\u012b")
buf.write(u"\u012c\7p\2\2\u012c\u012d\7u\2\2\u012d\u012e\7v\2\2\u012e")
buf.write(u">\3\2\2\2\u012f\u0130\7t\2\2\u0130\u0131\7g\2\2\u0131")
buf.write(u"\u0132\7c\2\2\u0132\u0133\7n\2\2\u0133@\3\2\2\2\u0134")
buf.write(u"\u0135\7h\2\2\u0135\u0136\7n\2\2\u0136\u0137\7q\2\2\u0137")
buf.write(u"\u0138\7c\2\2\u0138\u0139\7v\2\2\u0139B\3\2\2\2\u013a")
buf.write(u"\u013b\7n\2\2\u013b\u013c\7q\2\2\u013c\u013d\7p\2\2\u013d")
buf.write(u"\u013e\7i\2\2\u013eD\3\2\2\2\u013f\u0140\7e\2\2\u0140")
buf.write(u"\u0141\7q\2\2\u0141\u0142\7o\2\2\u0142\u0143\7r\2\2\u0143")
buf.write(u"\u0144\7n\2\2\u0144\u0145\7g\2\2\u0145\u0146\7z\2\2\u0146")
buf.write(u"F\3\2\2\2\u0147\u0148\7k\2\2\u0148\u0149\7p\2\2\u0149")
buf.write(u"\u014a\7v\2\2\u014aH\3\2\2\2\u014b\u014c\7d\2\2\u014c")
buf.write(u"\u014d\7q\2\2\u014d\u014e\7q\2\2\u014e\u014f\7n\2\2\u014f")
buf.write(u"J\3\2\2\2\u0150\u0151\7c\2\2\u0151\u0152\7u\2\2\u0152")
buf.write(u"\u0153\7u\2\2\u0153\u0154\7g\2\2\u0154\u0155\7t\2\2\u0155")
buf.write(u"\u0156\7v\2\2\u0156\u0157\7k\2\2\u0157\u0158\7q\2\2\u0158")
buf.write(u"\u0159\7p\2\2\u0159L\3\2\2\2\u015a\u015b\7u\2\2\u015b")
buf.write(u"\u015c\7r\2\2\u015c\u015d\7g\2\2\u015d\u015e\7e\2\2\u015e")
buf.write(u"\u015f\7k\2\2\u015f\u0160\7h\2\2\u0160\u0161\7k\2\2\u0161")
buf.write(u"\u0162\7e\2\2\u0162\u0163\7c\2\2\u0163\u0164\7v\2\2\u0164")
buf.write(u"\u0165\7k\2\2\u0165\u0166\7q\2\2\u0166\u0167\7p\2\2\u0167")
buf.write(u"N\3\2\2\2\u0168\u0169\7h\2\2\u0169\u016a\7t\2\2\u016a")
buf.write(u"\u016b\7q\2\2\u016b\u016c\7o\2\2\u016cP\3\2\2\2\u016d")
buf.write(u"\u016e\7p\2\2\u016e\u016f\7q\2\2\u016f\u0172\7v\2\2\u0170")
buf.write(u"\u0172\7#\2\2\u0171\u016d\3\2\2\2\u0171\u0170\3\2\2\2")
buf.write(u"\u0172R\3\2\2\2\u0173\u0174\7q\2\2\u0174\u0177\7t\2\2")
buf.write(u"\u0175\u0177\7~\2\2\u0176\u0173\3\2\2\2\u0176\u0175\3")
buf.write(u"\2\2\2\u0177T\3\2\2\2\u0178\u0179\7c\2\2\u0179\u017a")
buf.write(u"\7p\2\2\u017a\u017d\7f\2\2\u017b\u017d\7(\2\2\u017c\u0178")
buf.write(u"\3\2\2\2\u017c\u017b\3\2\2\2\u017dV\3\2\2\2\u017e\u017f")
buf.write(u"\7k\2\2\u017f\u0180\7h\2\2\u0180\u0185\7h\2\2\u0181\u0182")
buf.write(u"\7>\2\2\u0182\u0183\7/\2\2\u0183\u0185\7@\2\2\u0184\u017e")
buf.write(u"\3\2\2\2\u0184\u0181\3\2\2\2\u0185X\3\2\2\2\u0186\u0187")
buf.write(u"\7k\2\2\u0187\u0188\7o\2\2\u0188\u0189\7r\2\2\u0189\u018a")
buf.write(u"\7n\2\2\u018a\u018b\7k\2\2\u018b\u018c\7g\2\2\u018c\u0190")
buf.write(u"\7u\2\2\u018d\u018e\7/\2\2\u018e\u0190\7@\2\2\u018f\u0186")
buf.write(u"\3\2\2\2\u018f\u018d\3\2\2\2\u0190Z\3\2\2\2\u0191\u0192")
buf.write(u"\7z\2\2\u0192\u0193\7q\2\2\u0193\u0194\7t\2\2\u0194\\")
buf.write(u"\3\2\2\2\u0195\u0196\7t\2\2\u0196\u0197\7k\2\2\u0197")
buf.write(u"\u0198\7u\2\2\u0198\u0199\7g\2\2\u0199^\3\2\2\2\u019a")
buf.write(u"\u019b\7h\2\2\u019b\u019c\7c\2\2\u019c\u019d\7n\2\2\u019d")
buf.write(u"\u019e\7n\2\2\u019e`\3\2\2\2\u019f\u01a0\7c\2\2\u01a0")
buf.write(u"\u01a1\7n\2\2\u01a1\u01a2\7y\2\2\u01a2\u01a3\7c\2\2\u01a3")
buf.write(u"\u01a4\7{\2\2\u01a4\u01a7\7u\2\2\u01a5\u01a7\7I\2\2\u01a6")
buf.write(u"\u019f\3\2\2\2\u01a6\u01a5\3\2\2\2\u01a7b\3\2\2\2\u01a8")
buf.write(u"\u01a9\7g\2\2\u01a9\u01aa\7x\2\2\u01aa\u01ab\7g\2\2\u01ab")
buf.write(u"\u01ac\7p\2\2\u01ac\u01ad\7v\2\2\u01ad\u01ae\7w\2\2\u01ae")
buf.write(u"\u01af\7c\2\2\u01af\u01b0\7n\2\2\u01b0\u01b1\7n\2\2\u01b1")
buf.write(u"\u01b4\7{\2\2\u01b2\u01b4\7H\2\2\u01b3\u01a8\3\2\2\2")
buf.write(u"\u01b3\u01b2\3\2\2\2\u01b4d\3\2\2\2\u01b5\u01b6\7w\2")
buf.write(u"\2\u01b6\u01b7\7p\2\2\u01b7\u01b8\7v\2\2\u01b8\u01b9")
buf.write(u"\7k\2\2\u01b9\u01bc\7n\2\2\u01ba\u01bc\7W\2\2\u01bb\u01b5")
buf.write(u"\3\2\2\2\u01bb\u01ba\3\2\2\2\u01bcf\3\2\2\2\u01bd\u01be")
buf.write(u"\7w\2\2\u01be\u01bf\7p\2\2\u01bf\u01c0\7n\2\2\u01c0\u01c1")
buf.write(u"\7g\2\2\u01c1\u01c2\7u\2\2\u01c2\u01c5\7u\2\2\u01c3\u01c5")
buf.write(u"\7Y\2\2\u01c4\u01bd\3\2\2\2\u01c4\u01c3\3\2\2\2\u01c5")
buf.write(u"h\3\2\2\2\u01c6\u01c7\7j\2\2\u01c7\u01c8\7k\2\2\u01c8")
buf.write(u"\u01c9\7u\2\2\u01c9\u01ca\7v\2\2\u01ca\u01cb\7q\2\2\u01cb")
buf.write(u"\u01cc\7t\2\2\u01cc\u01cd\7k\2\2\u01cd\u01ce\7e\2\2\u01ce")
buf.write(u"\u01cf\7c\2\2\u01cf\u01d0\7n\2\2\u01d0\u01d1\7n\2\2\u01d1")
buf.write(u"\u01d4\7{\2\2\u01d2\u01d4\7J\2\2\u01d3\u01c6\3\2\2\2")
buf.write(u"\u01d3\u01d2\3\2\2\2\u01d4j\3\2\2\2\u01d5\u01d6\7q\2")
buf.write(u"\2\u01d6\u01d7\7p\2\2\u01d7\u01d8\7e\2\2\u01d8\u01db")
buf.write(u"\7g\2\2\u01d9\u01db\7Q\2\2\u01da\u01d5\3\2\2\2\u01da")
buf.write(u"\u01d9\3\2\2\2\u01dbl\3\2\2\2\u01dc\u01dd\7u\2\2\u01dd")
buf.write(u"\u01de\7k\2\2\u01de\u01df\7p\2\2\u01df\u01e0\7e\2\2\u01e0")
buf.write(u"\u01e3\7g\2\2\u01e1\u01e3\7U\2\2\u01e2\u01dc\3\2\2\2")
buf.write(u"\u01e2\u01e1\3\2\2\2\u01e3n\3\2\2\2\u01e4\u01e5\7p\2")
buf.write(u"\2\u01e5\u01e6\7g\2\2\u01e6\u01e7\7z\2\2\u01e7\u01ea")
buf.write(u"\7v\2\2\u01e8\u01ea\7Z\2\2\u01e9\u01e4\3\2\2\2\u01e9")
buf.write(u"\u01e8\3\2\2\2\u01eap\3\2\2\2\u01eb\u01ec\7r\2\2\u01ec")
buf.write(u"\u01ed\7t\2\2\u01ed\u01ee\7g\2\2\u01ee\u01f1\7x\2\2\u01ef")
buf.write(u"\u01f1\7[\2\2\u01f0\u01eb\3\2\2\2\u01f0\u01ef\3\2\2\2")
buf.write(u"\u01f1r\3\2\2\2\u01f2\u01f3\7?\2\2\u01f3\u01f4\7?\2\2")
buf.write(u"\u01f4t\3\2\2\2\u01f5\u01f6\7#\2\2\u01f6\u01f7\7?\2\2")
buf.write(u"\u01f7\u01f8\7?\2\2\u01f8v\3\2\2\2\u01f9\u01fa\7@\2\2")
buf.write(u"\u01fa\u01fb\7?\2\2\u01fbx\3\2\2\2\u01fc\u01fd\7>\2\2")
buf.write(u"\u01fd\u01fe\7?\2\2\u01fez\3\2\2\2\u01ff\u0200\7@\2\2")
buf.write(u"\u0200|\3\2\2\2\u0201\u0202\7>\2\2\u0202~\3\2\2\2\u0203")
buf.write(u"\u0204\7?\2\2\u0204\u0080\3\2\2\2\u0205\u0208\5\u0083")
buf.write(u"B\2\u0206\u0208\5\u0085C\2\u0207\u0205\3\2\2\2\u0207")
buf.write(u"\u0206\3\2\2\2\u0208\u0082\3\2\2\2\u0209\u020a\7v\2\2")
buf.write(u"\u020a\u020b\7t\2\2\u020b\u020c\7w\2\2\u020c\u0212\7")
buf.write(u"g\2\2\u020d\u020e\7V\2\2\u020e\u020f\7T\2\2\u020f\u0210")
buf.write(u"\7W\2\2\u0210\u0212\7G\2\2\u0211\u0209\3\2\2\2\u0211")
buf.write(u"\u020d\3\2\2\2\u0212\u0084\3\2\2\2\u0213\u0214\7h\2\2")
buf.write(u"\u0214\u0215\7c\2\2\u0215\u0216\7n\2\2\u0216\u0217\7")
buf.write(u"u\2\2\u0217\u021e\7g\2\2\u0218\u0219\7H\2\2\u0219\u021a")
buf.write(u"\7C\2\2\u021a\u021b\7N\2\2\u021b\u021c\7U\2\2\u021c\u021e")
buf.write(u"\7G\2\2\u021d\u0213\3\2\2\2\u021d\u0218\3\2\2\2\u021e")
buf.write(u"\u0086\3\2\2\2\u021f\u0223\5\u0089E\2\u0220\u0223\5\u0097")
buf.write(u"L\2\u0221\u0223\5\u00a1Q\2\u0222\u021f\3\2\2\2\u0222")
buf.write(u"\u0220\3\2\2\2\u0222\u0221\3\2\2\2\u0223\u0088\3\2\2")
buf.write(u"\2\u0224\u022f\7\62\2\2\u0225\u022c\5\u008fH\2\u0226")
buf.write(u"\u0228\5\u008bF\2\u0227\u0226\3\2\2\2\u0227\u0228\3\2")
buf.write(u"\2\2\u0228\u022d\3\2\2\2\u0229\u022a\5\u0095K\2\u022a")
buf.write(u"\u022b\5\u008bF\2\u022b\u022d\3\2\2\2\u022c\u0227\3\2")
buf.write(u"\2\2\u022c\u0229\3\2\2\2\u022d\u022f\3\2\2\2\u022e\u0224")
buf.write(u"\3\2\2\2\u022e\u0225\3\2\2\2\u022f\u008a\3\2\2\2\u0230")
buf.write(u"\u0235\5\u008dG\2\u0231\u0233\5\u0091I\2\u0232\u0231")
buf.write(u"\3\2\2\2\u0232\u0233\3\2\2\2\u0233\u0234\3\2\2\2\u0234")
buf.write(u"\u0236\5\u008dG\2\u0235\u0232\3\2\2\2\u0235\u0236\3\2")
buf.write(u"\2\2\u0236\u008c\3\2\2\2\u0237\u023a\7\62\2\2\u0238\u023a")
buf.write(u"\5\u008fH\2\u0239\u0237\3\2\2\2\u0239\u0238\3\2\2\2\u023a")
buf.write(u"\u008e\3\2\2\2\u023b\u023c\t\2\2\2\u023c\u0090\3\2\2")
buf.write(u"\2\u023d\u023f\5\u0093J\2\u023e\u023d\3\2\2\2\u023f\u0240")
buf.write(u"\3\2\2\2\u0240\u023e\3\2\2\2\u0240\u0241\3\2\2\2\u0241")
buf.write(u"\u0092\3\2\2\2\u0242\u0245\5\u008dG\2\u0243\u0245\7a")
buf.write(u"\2\2\u0244\u0242\3\2\2\2\u0244\u0243\3\2\2\2\u0245\u0094")
buf.write(u"\3\2\2\2\u0246\u0248\7a\2\2\u0247\u0246\3\2\2\2\u0248")
buf.write(u"\u0249\3\2\2\2\u0249\u0247\3\2\2\2\u0249\u024a\3\2\2")
buf.write(u"\2\u024a\u0096\3\2\2\2\u024b\u024c\7\62\2\2\u024c\u024d")
buf.write(u"\t\3\2\2\u024d\u024e\5\u0099M\2\u024e\u0098\3\2\2\2\u024f")
buf.write(u"\u0254\5\u009bN\2\u0250\u0252\5\u009dO\2\u0251\u0250")
buf.write(u"\3\2\2\2\u0251\u0252\3\2\2\2\u0252\u0253\3\2\2\2\u0253")
buf.write(u"\u0255\5\u009bN\2\u0254\u0251\3\2\2\2\u0254\u0255\3\2")
buf.write(u"\2\2\u0255\u009a\3\2\2\2\u0256\u0257\t\4\2\2\u0257\u009c")
buf.write(u"\3\2\2\2\u0258\u025a\5\u009fP\2\u0259\u0258\3\2\2\2\u025a")
buf.write(u"\u025b\3\2\2\2\u025b\u0259\3\2\2\2\u025b\u025c\3\2\2")
buf.write(u"\2\u025c\u009e\3\2\2\2\u025d\u0260\5\u009bN\2\u025e\u0260")
buf.write(u"\7a\2\2\u025f\u025d\3\2\2\2\u025f\u025e\3\2\2\2\u0260")
buf.write(u"\u00a0\3\2\2\2\u0261\u0262\7\62\2\2\u0262\u0263\t\5\2")
buf.write(u"\2\u0263\u0264\5\u00a3R\2\u0264\u00a2\3\2\2\2\u0265\u026a")
buf.write(u"\5\u00a5S\2\u0266\u0268\5\u00a7T\2\u0267\u0266\3\2\2")
buf.write(u"\2\u0267\u0268\3\2\2\2\u0268\u0269\3\2\2\2\u0269\u026b")
buf.write(u"\5\u00a5S\2\u026a\u0267\3\2\2\2\u026a\u026b\3\2\2\2\u026b")
buf.write(u"\u00a4\3\2\2\2\u026c\u026d\t\6\2\2\u026d\u00a6\3\2\2")
buf.write(u"\2\u026e\u0270\5\u00a9U\2\u026f\u026e\3\2\2\2\u0270\u0271")
buf.write(u"\3\2\2\2\u0271\u026f\3\2\2\2\u0271\u0272\3\2\2\2\u0272")
buf.write(u"\u00a8\3\2\2\2\u0273\u0276\5\u00a5S\2\u0274\u0276\7a")
buf.write(u"\2\2\u0275\u0273\3\2\2\2\u0275\u0274\3\2\2\2\u0276\u00aa")
buf.write(u"\3\2\2\2\u0277\u0278\5\u00adW\2\u0278\u00ac\3\2\2\2\u0279")
buf.write(u"\u027a\5\u008bF\2\u027a\u027c\7\60\2\2\u027b\u027d\5")
buf.write(u"\u008bF\2\u027c\u027b\3\2\2\2\u027c\u027d\3\2\2\2\u027d")
buf.write(u"\u027f\3\2\2\2\u027e\u0280\5\u00afX\2\u027f\u027e\3\2")
buf.write(u"\2\2\u027f\u0280\3\2\2\2\u0280\u028a\3\2\2\2\u0281\u0282")
buf.write(u"\7\60\2\2\u0282\u0284\5\u008bF\2\u0283\u0285\5\u00af")
buf.write(u"X\2\u0284\u0283\3\2\2\2\u0284\u0285\3\2\2\2\u0285\u028a")
buf.write(u"\3\2\2\2\u0286\u0287\5\u008bF\2\u0287\u0288\5\u00afX")
buf.write(u"\2\u0288\u028a\3\2\2\2\u0289\u0279\3\2\2\2\u0289\u0281")
buf.write(u"\3\2\2\2\u0289\u0286\3\2\2\2\u028a\u00ae\3\2\2\2\u028b")
buf.write(u"\u028c\5\u00b1Y\2\u028c\u028d\5\u00b3Z\2\u028d\u00b0")
buf.write(u"\3\2\2\2\u028e\u028f\t\7\2\2\u028f\u00b2\3\2\2\2\u0290")
buf.write(u"\u0292\5\u00b5[\2\u0291\u0290\3\2\2\2\u0291\u0292\3\2")
buf.write(u"\2\2\u0292\u0294\3\2\2\2\u0293\u0295\5\u008dG\2\u0294")
buf.write(u"\u0293\3\2\2\2\u0295\u0296\3\2\2\2\u0296\u0294\3\2\2")
buf.write(u"\2\u0296\u0297\3\2\2\2\u0297\u00b4\3\2\2\2\u0298\u0299")
buf.write(u"\t\b\2\2\u0299\u00b6\3\2\2\2\u029a\u029e\5\u00b9]\2\u029b")
buf.write(u"\u029d\5\u00bb^\2\u029c\u029b\3\2\2\2\u029d\u02a0\3\2")
buf.write(u"\2\2\u029e\u029c\3\2\2\2\u029e\u029f\3\2\2\2\u029f\u00b8")
buf.write(u"\3\2\2\2\u02a0\u029e\3\2\2\2\u02a1\u02a4\5\u00bd_\2\u02a2")
buf.write(u"\u02a4\7&\2\2\u02a3\u02a1\3\2\2\2\u02a3\u02a2\3\2\2\2")
buf.write(u"\u02a4\u00ba\3\2\2\2\u02a5\u02a9\5\u00b9]\2\u02a6\u02a9")
buf.write(u"\5\u008dG\2\u02a7\u02a9\4\60\61\2\u02a8\u02a5\3\2\2\2")
buf.write(u"\u02a8\u02a6\3\2\2\2\u02a8\u02a7\3\2\2\2\u02a9\u00bc")
buf.write(u"\3\2\2\2\u02aa\u02ad\5\u00bf`\2\u02ab\u02ad\7a\2\2\u02ac")
buf.write(u"\u02aa\3\2\2\2\u02ac\u02ab\3\2\2\2\u02ad\u00be\3\2\2")
buf.write(u"\2\u02ae\u02af\t\t\2\2\u02af\u00c0\3\2\2\2\u02b0\u02b1")
buf.write(u"\t\n\2\2\u02b1\u02b2\3\2\2\2\u02b2\u02b3\ba\2\2\u02b3")
buf.write(u"\u00c2\3\2\2\2\u02b4\u02b6\t\13\2\2\u02b5\u02b4\3\2\2")
buf.write(u"\2\u02b6\u02b7\3\2\2\2\u02b7\u02b5\3\2\2\2\u02b7\u02b8")
buf.write(u"\3\2\2\2\u02b8\u02b9\3\2\2\2\u02b9\u02ba\bb\2\2\u02ba")
buf.write(u"\u00c4\3\2\2\2\u02bb\u02bc\7\61\2\2\u02bc\u02bd\7,\2")
buf.write(u"\2\u02bd\u02c1\3\2\2\2\u02be\u02c0\13\2\2\2\u02bf\u02be")
buf.write(u"\3\2\2\2\u02c0\u02c3\3\2\2\2\u02c1\u02c2\3\2\2\2\u02c1")
buf.write(u"\u02bf\3\2\2\2\u02c2\u02c4\3\2\2\2\u02c3\u02c1\3\2\2")
buf.write(u"\2\u02c4\u02c5\7,\2\2\u02c5\u02c6\7\61\2\2\u02c6\u02c7")
buf.write(u"\3\2\2\2\u02c7\u02c8\bc\2\2\u02c8\u00c6\3\2\2\2\u02c9")
buf.write(u"\u02ca\7\61\2\2\u02ca\u02cb\7\61\2\2\u02cb\u02cf\3\2")
buf.write(u"\2\2\u02cc\u02ce\n\f\2\2\u02cd\u02cc\3\2\2\2\u02ce\u02d1")
buf.write(u"\3\2\2\2\u02cf\u02cd\3\2\2\2\u02cf\u02d0\3\2\2\2\u02d0")
buf.write(u"\u02d2\3\2\2\2\u02d1\u02cf\3\2\2\2\u02d2\u02d3\bd\2\2")
buf.write(u"\u02d3\u00c8\3\2\2\2\63\2\u0171\u0176\u017c\u0184\u018f")
buf.write(u"\u01a6\u01b3\u01bb\u01c4\u01d3\u01da\u01e2\u01e9\u01f0")
buf.write(u"\u0207\u0211\u021d\u0222\u0227\u022c\u022e\u0232\u0235")
buf.write(u"\u0239\u0240\u0244\u0249\u0251\u0254\u025b\u025f\u0267")
buf.write(u"\u026a\u0271\u0275\u027c\u027f\u0284\u0289\u0291\u0296")
buf.write(u"\u029e\u02a3\u02a8\u02ac\u02b7\u02c1\u02cf\3\b\2\2")
return buf.getvalue()
class LtlLexer(Lexer):
atn = ATNDeserializer().deserialize(serializedATN())
decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
MINUS = 1
PLUS = 2
TIMES = 3
DIVIDE = 4
LPAREN = 5
RPAREN = 6
LBRACE = 7
RBRACE = 8
LBRACK = 9
RBRACK = 10
SEMICOLON = 11
COLON = 12
COMMA = 13
DOT = 14
AT = 15
ABS = 16
SQRT = 17
EXP = 18
POW = 19
SEC = 20
MSEC = 21
USEC = 22
NSEC = 23
PSEC = 24
ROS_Topic = 25
Import = 26
Input = 27
Output = 28
Internal = 29
Constant = 30
DomainTypeReal = 31
DomainTypeFloat = 32
DomainTypeLong = 33
DomainTypeComplex = 34
DomainTypeInt = 35
DomainTypeBool = 36
Assertion = 37
Specification = 38
From = 39
NotOperator = 40
OrOperator = 41
AndOperator = 42
IffOperator = 43
ImpliesOperator = 44
XorOperator = 45
RiseOperator = 46
FallOperator = 47
AlwaysOperator = 48
EventuallyOperator = 49
UntilOperator = 50
UnlessOperator = 51
HistoricallyOperator = 52
OnceOperator = 53
SinceOperator = 54
NextOperator = 55
PreviousOperator = 56
EqualOperator = 57
NotEqualOperator = 58
GreaterOrEqualOperator = 59
LesserOrEqualOperator = 60
GreaterOperator = 61
LesserOperator = 62
EQUAL = 63
BooleanLiteral = 64
TRUE = 65
FALSE = 66
IntegerLiteral = 67
RealLiteral = 68
Identifier = 69
LINE_TERMINATOR = 70
WHITESPACE = 71
COMMENT = 72
LINE_COMMENT = 73
modeNames = [ u"DEFAULT_MODE" ]
literalNames = [ u"<INVALID>",
u"'-'", u"'+'", u"'*'", u"'/'", u"'('", u"')'", u"'{'", u"'}'",
u"'['", u"']'", u"';'", u"':'", u"','", u"'.'", u"'@'", u"'abs'",
u"'sqrt'", u"'exp'", u"'pow'", u"'s'", u"'ms'", u"'us'", u"'ns'",
u"'ps'", u"'topic'", u"'import'", u"'input'", u"'output'", u"'internal'",
u"'const'", u"'real'", u"'float'", u"'long'", u"'complex'",
u"'int'", u"'bool'", u"'assertion'", u"'specification'", u"'from'",
u"'xor'", u"'rise'", u"'fall'", u"'=='", u"'!=='", u"'>='",
u"'<='", u"'>'", u"'<'", u"'='" ]
symbolicNames = [ u"<INVALID>",
u"MINUS", u"PLUS", u"TIMES", u"DIVIDE", u"LPAREN", u"RPAREN",
u"LBRACE", u"RBRACE", u"LBRACK", u"RBRACK", u"SEMICOLON", u"COLON",
u"COMMA", u"DOT", u"AT", u"ABS", u"SQRT", u"EXP", u"POW", u"SEC",
u"MSEC", u"USEC", u"NSEC", u"PSEC", u"ROS_Topic", u"Import",
u"Input", u"Output", u"Internal", u"Constant", u"DomainTypeReal",
u"DomainTypeFloat", u"DomainTypeLong", u"DomainTypeComplex",
u"DomainTypeInt", u"DomainTypeBool", u"Assertion", u"Specification",
u"From", u"NotOperator", u"OrOperator", u"AndOperator", u"IffOperator",
u"ImpliesOperator", u"XorOperator", u"RiseOperator", u"FallOperator",
u"AlwaysOperator", u"EventuallyOperator", u"UntilOperator",
u"UnlessOperator", u"HistoricallyOperator", u"OnceOperator",
u"SinceOperator", u"NextOperator", u"PreviousOperator", u"EqualOperator",
u"NotEqualOperator", u"GreaterOrEqualOperator", u"LesserOrEqualOperator",
u"GreaterOperator", u"LesserOperator", u"EQUAL", u"BooleanLiteral",
u"TRUE", u"FALSE", u"IntegerLiteral", u"RealLiteral", u"Identifier",
u"LINE_TERMINATOR", u"WHITESPACE", u"COMMENT", u"LINE_COMMENT" ]
ruleNames = [ u"MINUS", u"PLUS", u"TIMES", u"DIVIDE", u"LPAREN", u"RPAREN",
u"LBRACE", u"RBRACE", u"LBRACK", u"RBRACK", u"SEMICOLON",
u"COLON", u"COMMA", u"DOT", u"AT", u"ABS", u"SQRT", u"EXP",
u"POW", u"SEC", u"MSEC", u"USEC", u"NSEC", u"PSEC", u"ROS_Topic",
u"Import", u"Input", u"Output", u"Internal", u"Constant",
u"DomainTypeReal", u"DomainTypeFloat", u"DomainTypeLong",
u"DomainTypeComplex", u"DomainTypeInt", u"DomainTypeBool",
u"Assertion", u"Specification", u"From", u"NotOperator",
u"OrOperator", u"AndOperator", u"IffOperator", u"ImpliesOperator",
u"XorOperator", u"RiseOperator", u"FallOperator", u"AlwaysOperator",
u"EventuallyOperator", u"UntilOperator", u"UnlessOperator",
u"HistoricallyOperator", u"OnceOperator", u"SinceOperator",
u"NextOperator", u"PreviousOperator", u"EqualOperator",
u"NotEqualOperator", u"GreaterOrEqualOperator", u"LesserOrEqualOperator",
u"GreaterOperator", u"LesserOperator", u"EQUAL", u"BooleanLiteral",
u"TRUE", u"FALSE", u"IntegerLiteral", u"DecimalNumeral",
u"Digits", u"Digit", u"NonZeroDigit", u"DigitsAndUnderscores",
u"DigitOrUnderscore", u"Underscores", u"HexNumeral", u"HexDigits",
u"HexDigit", u"HexDigitsAndUnderscores", u"HexDigitOrUnderscore",
u"BinaryNumeral", u"BinaryDigits", u"BinaryDigit", u"BinaryDigitsAndUnderscores",
u"BinaryDigitOrUnderscore", u"RealLiteral", u"DecimalRealLiteral",
u"ExponentPart", u"ExponentIndicator", u"SignedInteger",
u"Sign", u"Identifier", u"IdentifierStart", u"IdentifierPart",
u"LetterOrUnderscore", u"Letter", u"LINE_TERMINATOR",
u"WHITESPACE", u"COMMENT", u"LINE_COMMENT" ]
grammarFileName = u"LtlLexer.g4"
def __init__(self, input=None):
super(LtlLexer, self).__init__(input)
self.checkVersion("4.5.1")
self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
self._actions = None
self._predicates = None
| 64.226994 | 103 | 0.588563 | 6,995 | 31,407 | 2.637312 | 0.158828 | 0.118387 | 0.064072 | 0.074588 | 0.265178 | 0.216717 | 0.196607 | 0.135028 | 0.120989 | 0.117357 | 0 | 0.327926 | 0.148184 | 31,407 | 488 | 104 | 64.358607 | 0.361642 | 0.001815 | 0 | 0.008511 | 1 | 0.506383 | 0.626639 | 0.563216 | 0 | 0 | 0 | 0 | 0.008511 | 1 | 0.004255 | false | 0 | 0.014894 | 0 | 0.193617 | 0.002128 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57f98d30470e929b25ae7a8bddfb25b5640e9fbc | 201 | py | Python | qqai/__init__.py | clumsyme/qqai | e223dd6078a82506f17f620b741e171d0ea2456d | ["MIT"] | 71 | 2018-08-23T05:46:59.000Z | 2022-01-28T14:30:29.000Z | qqai/__init__.py | leon92101/qqai | e223dd6078a82506f17f620b741e171d0ea2456d | ["MIT"] | 2 | 2018-08-27T01:43:47.000Z | 2019-01-14T09:09:35.000Z | qqai/__init__.py | leon92101/qqai | e223dd6078a82506f17f620b741e171d0ea2456d | ["MIT"] | 17 | 2018-08-23T09:27:03.000Z | 2021-11-21T10:31:49.000Z
__all__ = ['nlp', 'aai', 'vision', 'Detectface', 'TextChat', 'ImgToText', 'NLPTrans']
import qqai.nlp
import qqai.aai
import qqai.vision
from qqai.qqai import Detectface, TextChat, ImgToText, NLPTrans | 33.5 | 85 | 0.741294 | 25 | 201 | 5.8 | 0.44 | 0.206897 | 0.372414 | 0.482759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109453 | 201 | 6 | 86 | 33.5 | 0.810056 | 0 | 0 | 0 | 0 | 0 | 0.232673 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
17c6824e3840765575de97a005f6c396399ac939 | 1,667 | py | Python | filetest/__init__.py | looking-for-a-job/filetest.py | 9fbe8c370d8fa73858fbc4964f0f641b93cdea0f | ["Unlicense"] | null | null | null | filetest/__init__.py | looking-for-a-job/filetest.py | 9fbe8c370d8fa73858fbc4964f0f641b93cdea0f | ["Unlicense"] | null | null | null | filetest/__init__.py | looking-for-a-job/filetest.py | 9fbe8c370d8fa73858fbc4964f0f641b93cdea0f | ["Unlicense"] | null | null | null
__all__ = ['d', 'e', 'f', 'nt', 'ot', 'r', 's', 'x', 'w']
import os
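# These helpers mirror the shell test(1) / Perl-style file-test operators
# (-d, -e, -f, -nt, -ot, -r, -s, -x, -w). Hypothetical usage sketch:
#     if f('data.csv') and r('data.csv'):
#         parse('data.csv')  # parse() is illustrative only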
def d(path):
"""return True if path exists and is a directory, else False"""
    return os.path.exists(path) and os.path.isdir(path)
def e(path):
"""return True if path exists, else False"""
return os.path.exists(path)
def f(path):
"""return True if file exists and is a regular file, else False"""
return os.path.exists(path) and os.path.isfile(path)
def nt(path1, path2):
"""return True if path1 is newer than path2, else False"""
t1 = os.path.getmtime(path1) if os.path.exists(path1) else None
t2 = os.path.getmtime(path2) if os.path.exists(path2) else None
    return bool((t1 and t2 and t1 > t2) or (t1 and not t2))
def ot(path1, path2):
"""return True if path1 is older than path2, else False"""
t1 = os.path.getmtime(path1) if os.path.exists(path1) else None
t2 = os.path.getmtime(path2) if os.path.exists(path2) else None
    return bool((t1 and t2 and t2 > t1) or (t2 and not t1))
def r(path):
"""return True if path exists and has read permission (for the current user), else False"""
return os.path.exists(path) and os.access(path, os.R_OK)
def s(path):
"""return True if path exists and is not zero size, else False"""
    return os.path.exists(path) and os.stat(path).st_size > 0
def x(path):
"""return True if path exists and has execute permission (for the current user), else False"""
return os.path.exists(path) and os.access(path, os.X_OK)
def w(path):
"""return True if path exists and has write permission (for the current user), else False"""
return os.path.exists(path) and os.access(path, os.W_OK)
| 30.87037 | 98 | 0.664667 | 293 | 1,667 | 3.750853 | 0.180887 | 0.154686 | 0.120109 | 0.101911 | 0.818926 | 0.818926 | 0.795268 | 0.714286 | 0.570519 | 0.537762 | 0 | 0.023952 | 0.19856 | 1,667 | 53 | 99 | 31.45283 | 0.798653 | 0.35093 | 0 | 0.25 | 0 | 0 | 0.010618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.041667 | 0 | 0.791667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
aa19f21c298992d6f151fa39f133ffa7bccf8ff3 | 48,709 | py | Python | 4_extracting_mtf_tsmfe/pymfe/autocorr.py | FelSiq/ts-pymfe-tests | b11000d9745b7822f026b966d91255ecc7f77564 | ["MIT"] | null | null | null | 4_extracting_mtf_tsmfe/pymfe/autocorr.py | FelSiq/ts-pymfe-tests | b11000d9745b7822f026b966d91255ecc7f77564 | ["MIT"] | null | null | null | 4_extracting_mtf_tsmfe/pymfe/autocorr.py | FelSiq/ts-pymfe-tests | b11000d9745b7822f026b966d91255ecc7f77564 | ["MIT"] | null | null | null
"""Module dedicated to autocorrelation time-series meta-features."""
import typing as t
import statsmodels.tsa.stattools
import numpy as np
import sklearn.gaussian_process
import pymfe._embed as _embed
import pymfe._utils as _utils
import pymfe._detrend as _detrend
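# Optional dependency: the features that do not rely on the statistical
# tests module keep working even when it cannot be imported.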
try:
import pymfe.stat_tests as stat_tests
except ImportError:
pass
class MFETSAutocorr:
"""Extract time-series meta-features from Autocorr group."""
@classmethod
def precompute_detrended_acf(cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
unbiased: bool = True,
**kwargs) -> t.Dict[str, np.ndarray]:
"""Precompute the detrended autocorrelation function.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
kwargs:
Additional arguments and previous precomputed items. May
speed up this precomputation.
Returns
-------
dict
The following precomputed item is returned:
* ``detrended_acfs`` (:obj:`np.ndarray`): the autocorrelation
function from the detrended time-series.
"""
precomp_vals = {}
if "detrended_acfs" not in kwargs:
precomp_vals["detrended_acfs"] = cls.ft_acf_detrended(
ts=ts, nlags=nlags, unbiased=unbiased)
return precomp_vals
@classmethod
def precompute_gaussian_model(cls,
ts: np.ndarray,
random_state: t.Optional[int] = None,
**kwargs) -> t.Dict[str, t.Any]:
"""Precompute a gaussian process model.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
random_state : int, optional
Random seed to optimize the gaussian process model, to keep
the results reproducible.
kwargs:
Additional arguments and previous precomputed items. May
speed up this precomputation.
Returns
-------
dict
The following precomputed item is returned:
* ``gaussian_model`` (:obj:`GaussianProcessRegressor`):
Gaussian process fitted model.
* ``gaussian_resid`` (:obj:`np.ndarray`): Gaussian process
              model residuals (difference from the original time-series).
The following item is necessary and, therefore, also precomputed
if necessary:
* ``ts_scaled`` (:obj:`np.ndarray`): standardized time-series
values (z-score).
"""
precomp_vals = {} # type: t.Dict[str, t.Any]
ts_scaled = kwargs.get("ts_scaled")
if ts_scaled is None:
precomp_vals["ts_scaled"] = _utils.standardize_ts(ts=ts)
ts_scaled = precomp_vals["ts_scaled"]
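        # Reuse a caller-supplied fitted model whenever possible, so that
        # the costly gaussian process hyperparameter optimization runs at
        # most once per time-series.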
if "gaussian_model" not in kwargs:
gaussian_model = _utils.fit_gaussian_process(
ts=ts, ts_scaled=ts_scaled, random_state=random_state)
precomp_vals["gaussian_model"] = gaussian_model
        gaussian_model = kwargs.get("gaussian_model",
                                    precomp_vals.get("gaussian_model"))
if "gaussian_resid" not in kwargs:
gaussian_resid = _utils.fit_gaussian_process(
ts=ts,
ts_scaled=ts_scaled,
gaussian_model=gaussian_model,
return_residuals=True)
precomp_vals["gaussian_resid"] = gaussian_resid
return precomp_vals
@classmethod
def _calc_acf(cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
unbiased: bool = True,
detrend: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None,
ts_detrended: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Precompute the autocorrelation function.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrend : bool, optional (default=True)
If True, detrend the time-series using Friedman's Super Smoother
before calculating the autocorrelation function, or the user
given detrended time-series from ``ts_detrended`` argument.
detrended_acfs : :obj:`np.ndarray`, optional
This method's return value. Used to take advantage of
precomputations.
ts_detrended : :obj:`np.ndarray`, optional
            Detrended time-series. If given, it is used directly; if not
            given and ``detrend`` is True, the time-series is detrended
            within this method using Friedman's Super Smoother.
Returns
-------
:obj:`np.ndarray`
If `detrend` is True, the autocorrelation function up to `nlags`
lags of the detrended time-series. If `detrend` is False, the
autocorrelation function up to `nlags` lags of the time-series.
"""
if detrended_acfs is not None and (nlags is None
or detrended_acfs.size == nlags):
return detrended_acfs
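        # _detrend.decompose is expected to return the detrended (residual)
        # component at index 2; if the decomposition fails with a ValueError,
        # the raw time-series is used as a fallback below.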
if detrend and ts_detrended is None:
try:
ts_detrended = _detrend.decompose(ts=ts, ts_period=0)[2]
except ValueError:
pass
if ts_detrended is None:
ts_detrended = ts
if nlags is None:
nlags = ts.size // 2
acf = statsmodels.tsa.stattools.acf(ts_detrended,
nlags=nlags,
unbiased=unbiased,
fft=True)
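        # Drop the lag 0 autocorrelation, which equals 1 by definition, so
        # that index i of the returned array corresponds to lag i + 1.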
return acf[1:]
@classmethod
def _calc_pacf(cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
method: str = "ols-unbiased",
detrend: bool = True,
ts_detrended: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Precompute the partial autocorrelation function.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the partial autocorrelation function.
method : str, optional (default="ols-unbiased")
Method used to estimate the partial autocorrelations. Check the
`statsmodels.tsa.stattools.pacf` documentation for the complete
list of the available methods.
detrend : bool, optional (default=True)
If True, detrend the time-series using Friedman's Super Smoother
before calculating the autocorrelation function, or the user
given detrended time-series from ``ts_detrended`` argument.
ts_detrended : :obj:`np.ndarray`, optional
            Detrended time-series. If given, it is used directly; if not
            given and ``detrend`` is True, the time-series is detrended
            within this method using Friedman's Super Smoother.
Returns
-------
:obj:`np.ndarray`
If `detrend` is True, the partial autocorrelation function up to
`nlags` lags of the detrended time-series. If `detrend` is False,
the autocorrelation function up to `nlags` lags of the time-series.
"""
if nlags is None:
nlags = 1 + ts.size // 10
if detrend and ts_detrended is None:
try:
ts_detrended = _detrend.decompose(ts=ts, ts_period=0)[2]
except ValueError:
pass
if ts_detrended is None:
ts_detrended = ts
pacf = statsmodels.tsa.stattools.pacf(ts_detrended,
nlags=nlags,
method=method)
return pacf[1:]
@classmethod
def _first_acf_below_threshold(
cls,
ts: np.ndarray,
threshold: float,
abs_acf_vals: bool = False,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None,
) -> t.Union[int, float]:
"""First autocorrelation lag below a given threshold.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
threshold : float
The threshold to find the first lag below it.
abs_acf_vals : bool, optional (default=False)
            If True, evaluate the absolute value of the autocorrelation
function.
max_nlags : int, optional
            Number of lags used to evaluate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrended_acfs : :obj:`np.ndarray`, optional
This method's return value. Used to take advantage of
precomputations.
Returns
-------
int or float
            Lag corresponding to the first autocorrelation function value
            below or equal to the given ``threshold``, if any. Return
            `np.nan` if no such lag is found.
"""
detrended_acfs = cls._calc_acf(ts=ts,
nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
if abs_acf_vals:
# Note: in this case, we are testing if
# -threshold <= acf <= threshold.
detrended_acfs = np.abs(detrended_acfs)
nonpos_acfs = np.flatnonzero(detrended_acfs <= threshold)
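        # Since the ACF array starts at lag 1, add 1 to translate the
        # array index into the corresponding lag.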
try:
return nonpos_acfs[0] + 1
except IndexError:
return np.nan
@classmethod
def ft_acf(cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
unbiased: bool = True) -> np.ndarray:
"""Autocorrelation function of the time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
Returns
-------
:obj:`np.ndarray`
The autocorrelation function up to `nlags` lags of the time-series.
"""
return cls._calc_acf(ts=ts,
nlags=nlags,
unbiased=unbiased,
detrend=False)
@classmethod
def ft_acf_detrended(
cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
unbiased: bool = True,
ts_detrended: t.Optional[np.ndarray] = None,
detrended_acfs: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Autocorrelation function of the detrended time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
ts_detrended : :obj:`np.ndarray`, optional
Detrended time-series. If not given, the time-series is detrended
within this method using Friedman's Super Smoother.
detrended_acfs : :obj:`np.ndarray`, optional
This method's return value. Used to take advantage of
precomputations.
Returns
-------
:obj:`np.ndarray`
The autocorrelation function up to `nlags` lags of the detrended
time-series.
"""
return cls._calc_acf(ts=ts,
nlags=nlags,
unbiased=unbiased,
detrend=True,
detrended_acfs=detrended_acfs,
ts_detrended=ts_detrended)
@classmethod
def ft_acf_diff(cls,
ts: np.ndarray,
num_diff: int = 1,
nlags: t.Optional[int] = None,
detrend: bool = True,
ts_detrended: t.Optional[np.ndarray] = None,
unbiased: bool = True) -> np.ndarray:
"""Autocorrelation function of the differenced time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
num_diff : int, optional (default=1)
Order of differentiation.
nlags : int, optional
Number of lags to calculate the autocorrelation function.
detrend : bool, optional (default=True)
If True, detrend the time-series using Friedman's Super Smoother
before calculating the autocorrelation function, or the user
given detrended time-series from ``ts_detrended`` argument.
ts_detrended : :obj:`np.ndarray`, optional
Detrended time-series. If not given and ``detrend`` is True, the
time-series is detrended within this method using Friedman's Super
Smoother.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
Returns
-------
:obj:`np.ndarray`
The autocorrelation function up to `nlags` lags of the differenced
time-series.
"""
return cls._calc_acf(ts=np.diff(ts, n=num_diff),
detrend=detrend,
nlags=nlags,
unbiased=unbiased,
ts_detrended=ts_detrended)
@classmethod
def ft_pacf(cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
method: str = "ols-unbiased") -> np.ndarray:
"""Partial autocorrelation function of the time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the partial autocorrelation function.
method : str, optional (default="ols-unbiased")
Method used to estimate the partial autocorrelations. Check the
`statsmodels.tsa.stattools.pacf` documentation for the complete
list of the available methods.
Returns
-------
:obj:`np.ndarray`
The autocorrelation function up to `nlags` lags of the time-series.
"""
return cls._calc_pacf(ts=ts, nlags=nlags, method=method, detrend=False)
@classmethod
def ft_pacf_detrended(
cls,
ts: np.ndarray,
nlags: t.Optional[int] = None,
method: str = "ols-unbiased",
ts_detrended: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Partial autocorrelation function of the detrended time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the partial autocorrelation function.
method : str, optional (default="ols-unbiased")
Method used to estimate the partial autocorrelations. Check the
`statsmodels.tsa.stattools.pacf` documentation for the complete
list of the available methods.
ts_detrended : :obj:`np.ndarray`, optional
Detrended time-series. If not given, the time-series is detrended
within this method using Friedman's Super Smoother.
Returns
-------
:obj:`np.ndarray`
The partial autocorrelation function up to `nlags` lags of the
detrended time-series.
"""
return cls._calc_pacf(ts=ts,
nlags=nlags,
method=method,
detrend=True,
ts_detrended=ts_detrended)
@classmethod
def ft_pacf_diff(
cls,
ts: np.ndarray,
num_diff: int = 1,
nlags: t.Optional[int] = None,
method: str = "ols-unbiased",
detrend: bool = True,
ts_detrended: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Partial autocorrelation function of the differenced time-series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional
Number of lags to calculate the partial autocorrelation function.
method : str, optional (default="ols-unbiased")
Method used to estimate the partial autocorrelations. Check the
`statsmodels.tsa.stattools.pacf` documentation for the complete
list of the available methods.
detrend : bool, optional (default=True)
If True, detrend the time-series using Friedman's Super Smoother
before calculating the autocorrelation function, or the user
given detrended time-series from ``ts_detrended`` argument.
ts_detrended : :obj:`np.ndarray`, optional
            Detrended time-series. If given, it is used directly; if not
            given and ``detrend`` is True, the time-series is detrended
            within this method using Friedman's Super Smoother.
Returns
-------
:obj:`np.ndarray`
If `detrend` is True, the partial autocorrelation function up to
`nlags` lags of the detrended time-series. If `detrend` is False,
the autocorrelation function up to `nlags` lags of the time-series.
"""
return cls._calc_pacf(ts=np.diff(ts, n=num_diff),
nlags=nlags,
method=method,
detrend=detrend,
ts_detrended=ts_detrended)
@classmethod
def ft_acf_first_nonsig(
cls,
ts: np.ndarray,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
threshold: t.Optional[t.Union[int, float]] = None,
detrended_acfs: t.Optional[np.ndarray] = None,
) -> t.Union[int, float]:
"""First non-significative detrended autocorrelation lag.
The critical value to determine if a autocorrelation is significative
is 1.96 / sqrt(len(ts)), but can be changed using the ``threshold``
parameter.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
max_nlags : int, optional
            Number of lags used to evaluate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
        threshold : int or float, optional
            The critical value used to determine whether an autocorrelation
            value is significant or not: any autocorrelation with absolute
            value higher than this threshold is considered significant. If
            None, the threshold used will be 1.96 / sqrt(len(ts)).
        detrended_acfs : :obj:`np.ndarray`, optional
            Detrended time-series autocorrelation function with each index
            corresponding to its lag starting from the lag 1.
Returns
-------
int or float
Lag corresponding to the first autocorrelation with absolute value
below the given ``threshold``, if any. Return `np.nan` if no such
index is found.
"""
if threshold is None:
threshold = 1.96 / np.sqrt(ts.size)
res = cls._first_acf_below_threshold(ts=ts,
threshold=threshold,
abs_acf_vals=True,
max_nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
return res
@classmethod
def ft_acf_first_nonpos(
cls,
ts: np.ndarray,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None,
) -> t.Union[int, float]:
"""First non-positive detrended autocorrelation lag.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
max_nlags : int, optional
            Number of lags used to evaluate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrended_acfs : :obj:`np.ndarray`, optional
Detrended time-series autocorrelation function with each index
corresponding to its lag starting from the lag 1.
Returns
-------
int or float
            Lag corresponding to the first autocorrelation below or equal to
            zero, if any. Return `np.nan` if no such lag is found.
"""
res = cls._first_acf_below_threshold(ts=ts,
threshold=0,
abs_acf_vals=False,
max_nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
return res
@classmethod
def ft_first_acf_locmin(
cls,
ts: np.ndarray,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None,
) -> t.Union[int, float]:
"""First local minima detrended autocorrelation lag.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
max_nlags : int, optional
            Number of lags used to evaluate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrended_acfs : :obj:`np.ndarray`, optional
Detrended time-series autocorrelation function with each index
corresponding to its lag starting from the lag 1.
Returns
-------
int or float
            Lag corresponding to the first local minimum of the
            autocorrelation function, if any. Return `np.nan` if no such
            lag is found.
"""
detrended_acfs = cls._calc_acf(ts=ts,
nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
acfs_locmin = np.flatnonzero(
_utils.find_crit_pt(detrended_acfs, type_="min"))
try:
return acfs_locmin[0] + 1
except IndexError:
return np.nan
@classmethod
def ft_trev(cls,
ts: np.ndarray,
lag: t.Optional[t.Union[str, int]] = None,
only_numerator: bool = False,
max_nlags: t.Optional[int] = None,
detrended_acfs: t.Optional[np.ndarray] = None,
detrended_ami: t.Optional[np.ndarray] = None) -> float:
"""Normalized nonlinear autocorrelation Trev statistic.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
lag : int or str, optional
Lag to calculate the statistic. It must be a strictly positive
value, None or a string in {`acf`, `acf-nonsig`, `ami`}. In the
            latter two cases (None or a string), the lag is estimated within
            this method using the chosen strategy (or, if None, the
            `acf-nonsig` strategy by default) up to ``max_nlags``.
1. `acf`: the lag corresponds to the first non-positive value
in the autocorrelation function.
2. `acf-nonsig`: lag corresponds to the first non-significant
value in the autocorrelation function (absolute value below
the critical value of 1.96 / sqrt(ts.size)).
3. `ami`: lag corresponds to the first local minimum of the
time-series automutual information function.
only_numerator : bool, optional (default=False)
If True, return only the numerator from this statistic definition.
            See the references below for the complete statistic definition.
max_nlags : int, optional
If ``lag`` is not a numeric value, than it will be estimated using
either the time-series autocorrelation or mutual information
function estimated up to this argument value.
detrended_acfs : :obj:`np.ndarray`, optional
Array of time-series autocorrelation function (for distinct ordered
lags) of the detrended time-series. Used only if ``lag`` is any of
`acf`, `acf-nonsig` or None. If this argument is not given and the
            previous condition is met, the autocorrelation function will be
calculated inside this method up to ``max_nlags``.
detrended_ami : :obj:`np.ndarray`, optional
Array of time-series automutual information function (for distinct
ordered lags). Used only if ``lag`` is `ami`. If not given and the
            previous condition is met, the automutual information function
will be calculated inside this method up to ``max_nlags``.
Returns
        -------
float
Trev statistic.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
Extraction, Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
"""
_lag = _embed.embed_lag(ts=ts,
lag=lag,
max_nlags=max_nlags,
detrended_acfs=detrended_acfs,
detrended_ami=detrended_ami)
diff = ts[_lag:] - ts[:-_lag]
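        # trev = E[(x_{t+lag} - x_t)^3] / E[(x_{t+lag} - x_t)^2]^1.5, a
        # normalized measure of the asymmetry of the time-series under
        # time reversal.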
numen = np.mean(np.power(diff, 3))
if only_numerator:
return numen
denom = np.power(np.mean(np.square(diff)), 1.5)
trev = numen / denom
return trev
@classmethod
def ft_tc3(cls,
ts: np.ndarray,
lag: t.Optional[t.Union[str, int]] = None,
only_numerator: bool = False,
max_nlags: t.Optional[int] = None,
detrended_acfs: t.Optional[np.ndarray] = None,
detrended_ami: t.Optional[np.ndarray] = None) -> float:
"""Normalized nonlinear autocorrelation Tc3 statistic.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
lag : int or str, optional
Lag to calculate the statistic. It must be a strictly positive
value, None or a string in {`acf`, `acf-nonsig`, `ami`}. In the
            latter two cases (None or a string), the lag is estimated within
            this method using the chosen strategy (or, if None, the
            `acf-nonsig` strategy by default) up to ``max_nlags``.
1. `acf`: the lag corresponds to the first non-positive value
in the autocorrelation function.
2. `acf-nonsig`: lag corresponds to the first non-significant
value in the autocorrelation function (absolute value below
the critical value of 1.96 / sqrt(ts.size)).
3. `ami`: lag corresponds to the first local minimum of the
time-series automutual information function.
only_numerator : bool, optional (default=False)
If True, return only the numerator from this statistic definition.
            See the references below for the complete statistic definition.
max_nlags : int, optional
If ``lag`` is not a numeric value, than it will be estimated using
either the time-series autocorrelation or mutual information
function estimated up to this argument value.
detrended_acfs : :obj:`np.ndarray`, optional
Array of time-series autocorrelation function (for distinct ordered
lags) of the detrended time-series. Used only if ``lag`` is any of
`acf`, `acf-nonsig` or None. If this argument is not given and the
            previous condition is met, the autocorrelation function will be
calculated inside this method up to ``max_nlags``.
detrended_ami : :obj:`np.ndarray`, optional
Array of time-series automutual information function (for distinct
ordered lags). Used only if ``lag`` is `ami`. If not given and the
            previous condition is met, the automutual information function
will be calculated inside this method up to ``max_nlags``.
Returns
        -------
float
Tc3 statistic.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
Extraction, Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
"""
_lag = _embed.embed_lag(ts=ts,
lag=lag,
max_nlags=max_nlags,
detrended_acfs=detrended_acfs,
detrended_ami=detrended_ami)
ts_shift_1 = ts[:-2 * _lag]
ts_shift_2 = ts[_lag:-_lag]
ts_shift_3 = ts[2 * _lag:]
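        # tc3 = E[x_t * x_{t+lag} * x_{t+2*lag}] / |E[x_t * x_{t+lag}]|^1.5,
        # a normalized third-order autocorrelation statistic.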
_aux = ts_shift_1 * ts_shift_2
numen = np.mean(_aux * ts_shift_3)
if only_numerator:
return numen
denom = np.abs(np.mean(_aux))**1.5
tc3 = numen / denom
return tc3
@classmethod
def ft_gen_autocorr(cls,
ts: np.ndarray,
alpha: float = 1,
beta: float = 1,
lag: t.Optional[t.Union[str, int]] = None,
max_nlags: t.Optional[int] = None,
detrended_acfs: t.Optional[np.ndarray] = None,
detrended_ami: t.Optional[np.ndarray] = None) -> float:
"""Generalized autocorrelation of the time-series.
If alpha = beta, estimates how values of the same order of magnitude
are related in time. Otherwise, estimates correlations between
different magnitudes of the time series.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
alpha : float, optional (default=1)
Non-zero parameter.
beta : float, optional (default=1)
Non-zero parameter.
lag : int or str, optional
Lag to calculate the statistic. It must be a strictly positive
value, None or a string in {`acf`, `acf-nonsig`, `ami`}. In the
            latter two cases (None or a string), the lag is estimated within
            this method using the chosen strategy (or, if None, the
            `acf-nonsig` strategy by default) up to ``max_nlags``.
1. `acf`: the lag corresponds to the first non-positive value
in the autocorrelation function.
2. `acf-nonsig`: lag corresponds to the first non-significant
value in the autocorrelation function (absolute value below
the critical value of 1.96 / sqrt(ts.size)).
3. `ami`: lag corresponds to the first local minimum of the
time-series automutual information function.
max_nlags : int, optional
If ``lag`` is not a numeric value, than it will be estimated using
either the time-series autocorrelation or mutual information
function estimated up to this argument value.
detrended_acfs : :obj:`np.ndarray`, optional
Array of time-series autocorrelation function (for distinct ordered
lags) of the detrended time-series. Used only if ``lag`` is any of
`acf`, `acf-nonsig` or None. If this argument is not given and the
            previous condition is met, the autocorrelation function will be
calculated inside this method up to ``max_nlags``.
detrended_ami : :obj:`np.ndarray`, optional
Array of time-series automutual information function (for distinct
ordered lags). Used only if ``lag`` is `ami`. If not given and the
            previous condition is met, the automutual information function
will be calculated inside this method up to ``max_nlags``.
Returns
-------
float
Generalized autocorrelation of the time-series.
References
----------
.. [1] S.M. Duarte Queirós, L.G. Moyano, Yet on statistical properties
of traded volume: Correlation and mutual information at different
value magnitudes, Physica A: Statistical Mechanics and its
Applications, Volume 383, Issue 1, 2007, Pages 10-15, ISSN
0378-4371, https://doi.org/10.1016/j.physa.2007.04.082.
.. [2] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
Extraction, Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [3] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
DOI: 10.1098/rsif.2013.0048
"""
if np.isclose(alpha, 0.0):
raise ValueError("'alpha' parameter must be nonzero (got {})."
"".format(alpha))
if np.isclose(beta, 0.0):
raise ValueError("'beta' parameter must be nonzero (got {})."
"".format(beta))
_lag = _embed.embed_lag(ts=ts,
lag=lag,
max_nlags=max_nlags,
detrended_acfs=detrended_acfs,
detrended_ami=detrended_ami)
ts_abs = np.abs(ts)
ts_sft_1 = ts_abs[:-_lag]
ts_sft_2 = ts_abs[_lag:]
ts_sft_1_a = ts_sft_1**alpha
ts_sft_2_b = ts_sft_2**beta
ts_sft_1_a_mean = np.mean(ts_sft_1_a)
ts_sft_2_b_mean = np.mean(ts_sft_2_b)
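        # Generalized autocorrelation (reference [1]): the covariance
        # between |x_t|^alpha and |x_{t+lag}|^beta, normalized by the
        # product of their standard deviations.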
        gen_autocorr = (
            (np.mean(ts_sft_1_a * ts_sft_2_b) -
             ts_sft_1_a_mean * ts_sft_2_b_mean) /
            (np.sqrt(np.mean(ts_sft_1**(2 * alpha)) - ts_sft_1_a_mean**2) *
             np.sqrt(np.mean(ts_sft_2**(2 * beta)) - ts_sft_2_b_mean**2)))
return gen_autocorr
@classmethod
def ft_autocorr_crit_pt(
cls,
ts: np.ndarray,
crit_point_type: str = "non-plateau",
return_lags: bool = True,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Lags corresponding to minima or maxima of autocorrelation function.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
crit_point_type : str, optional (default="non-plateau")
Critical point type. Must be a value in {`non-plateau`, `plateau`,
`min`, `max`, `any`}.
return_lags : bool, optional (default=True)
If True, return the lags corresponding to the autocorrelation
function critical points. If False, return a binary array marking
with `1` the positions corresponding to the critical points, and
`0` otherwise.
max_nlags : int, optional
            Number of lags used to evaluate the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrended_acfs : :obj:`np.ndarray`, optional
Detrended time-series autocorrelation function with each index
corresponding to its lag starting from the lag 1.
Returns
-------
:obj:`np.ndarray`
If `return_lags` is True, return the lags corresponding to the
autocorrelation function critical points. If `return_lags` is
False, return a binary array marking with `1` the lag indices
(starting from lag 1) corresponding to the autocorrelation function
critical points.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
Extraction, Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
DOI: 10.1098/rsif.2013.0048
"""
detrended_acfs = cls._calc_acf(ts=ts,
nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
ac_shape = _utils.find_crit_pt(arr=detrended_acfs,
type_=crit_point_type)
# Note: in 'hctsa', either the sum or the mean is returned.
# However, to enable summarization, here we return the whole
# array.
if return_lags:
return np.flatnonzero(ac_shape)
return ac_shape.astype(int)
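    # Illustrative usage (same class-name assumption as above):
    #
    #     lags = MFETSAutocorr.ft_autocorr_crit_pt(ts, crit_point_type="min")
    #
    # returns the lags at which the detrended autocorrelation function
    # attains local minima.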
@classmethod
def ft_gresid_autocorr(
cls,
ts: np.ndarray,
nlags: int = 8,
unbiased: bool = True,
random_state: t.Optional[int] = None,
ts_scaled: t.Optional[np.ndarray] = None,
gaussian_resid: t.Optional[np.ndarray] = None,
gaussian_model: t.Optional[
sklearn.gaussian_process.GaussianProcessRegressor] = None,
) -> np.ndarray:
"""Autocorrelation function of the gaussian process model residuals.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional (default=8)
Number of lags evaluated in the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
random_state : int, optional
Random seed to optimize the gaussian process model, to keep
the results reproducible.
ts_scaled : :obj:`np.ndarray`, optional
Standardized time-series values. Used to take advantage of
precomputations. Used only if ``gaussian_resid`` is None.
gaussian_resid : :obj:`np.ndarray`, optional
Residuals of a gaussian process. Used to take advantage of
precomputations.
gaussian_model : :obj:`GaussianProcessRegressor`, optional
A fitted model of a gaussian process. Used to take advantage of
precomputations. Used only if ``gaussian_resid`` is None.
Returns
-------
:obj:`np.ndarray`
Autocorrelation function of the gaussian process residuals up
to ``nlags``.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
            Extraction", Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
DOI: 10.1098/rsif.2013.0048
"""
if gaussian_resid is None:
gaussian_resid = _utils.fit_gaussian_process(
ts=ts,
ts_scaled=ts_scaled,
random_state=random_state,
gaussian_model=gaussian_model,
return_residuals=True)
gaussian_resid_acf = cls._calc_acf(ts=gaussian_resid,
nlags=nlags,
unbiased=unbiased)
return gaussian_resid_acf
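    # Note: computing the residuals once with ``_utils.fit_gaussian_process``
    # and passing them through ``gaussian_resid`` lets this method and
    # ``ft_gresid_lbtest`` below share a single, relatively expensive
    # Gaussian process fit.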
@classmethod
def ft_gresid_lbtest(
cls,
ts: np.ndarray,
nlags: int = 8,
return_pval: bool = True,
random_state: t.Optional[int] = None,
ts_scaled: t.Optional[np.ndarray] = None,
gaussian_resid: t.Optional[np.ndarray] = None,
gaussian_model: t.Optional[
sklearn.gaussian_process.GaussianProcessRegressor] = None,
) -> np.ndarray:
"""Ljung–Box test in the residuals of a gaussian process model.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
nlags : int, optional (default=8)
Number of lags evaluated in the Ljung-Box test.
return_pval : bool, optional (default=True)
If True, return the p-value of the test instead of the test
statistic.
random_state : int, optional
Random seed to optimize the gaussian process model, to keep
the results reproducible.
ts_scaled : :obj:`np.ndarray`, optional
Standardized time-series values. Used to take advantage of
precomputations. Used only if ``gaussian_resid`` is None.
gaussian_resid : :obj:`np.ndarray`, optional
Residuals of a gaussian process. Used to take advantage of
precomputations.
gaussian_model : :obj:`GaussianProcessRegressor`, optional
A fitted model of a gaussian process. Used to take advantage of
precomputations. Used only if ``gaussian_resid`` is None.
Returns
-------
:obj:`np.ndarray`
If `return_pval` is False, Ljung-Box test statistic for each lag
of the gaussian process residuals.
If `return_pval` is True, p-value associated with the Ljung-Box
test statistic for each lag of the gaussian process residuals.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
            Extraction", Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
DOI: 10.1098/rsif.2013.0048
"""
if gaussian_resid is None:
gaussian_resid = _utils.fit_gaussian_process(
ts=ts,
ts_scaled=ts_scaled,
random_state=random_state,
gaussian_model=gaussian_model,
return_residuals=True)
gaussian_lb_test = stat_tests.MFETSStatTests.ft_test_lb(
ts_residuals=gaussian_resid,
max_nlags=nlags,
return_pval=return_pval)
return gaussian_lb_test
@classmethod
def ft_autocorr_out_dist(
cls,
ts: np.ndarray,
p: float = 0.8,
max_nlags: t.Optional[int] = None,
unbiased: bool = True,
detrended_acfs: t.Optional[np.ndarray] = None) -> np.ndarray:
"""Distance between the autocorrelation with and without outliers.
        This method calculates the time-series autocorrelation function
        for all observations, and the autocorrelation function of the
        time-series without its most extreme values (those above the
        ``p`` quantile of all absolute values). The absolute difference
        between these two autocorrelation functions is returned.
Parameters
----------
ts : :obj:`np.ndarray`
One-dimensional time-series values.
p : float, optional (default=0.8)
Quantile of cut in the set of the time-series absolute values to
determine which instances are considered outliers.
max_nlags : int, optional
            Number of lags to evaluate in the autocorrelation function.
unbiased : bool, optional (default=True)
If True, the autocorrelation function is corrected for statistical
bias.
detrended_acfs : :obj:`np.ndarray`, optional
Detrended time-series autocorrelation function with each index
            corresponding to its lag, starting from lag 1.
Returns
-------
:obj:`np.ndarray`
Absolute difference element-wise between each autocorrelation
with and without outliers.
References
----------
.. [1] B.D. Fulcher and N.S. Jones, "hctsa: A Computational Framework
for Automated Time-Series Phenotyping Using Massive Feature
            Extraction", Cell Systems 5: 527 (2017).
DOI: 10.1016/j.cels.2017.10.001
.. [2] B.D. Fulcher, M.A. Little, N.S. Jones, "Highly comparative
time-series analysis: the empirical structure of time series and
their methods", J. Roy. Soc. Interface 10(83) 20130048 (2013).
DOI: 10.1098/rsif.2013.0048
"""
detrended_acfs = cls._calc_acf(ts=ts,
nlags=max_nlags,
unbiased=unbiased,
detrended_acfs=detrended_acfs)
ts_abs = np.abs(ts)
        ts_inliers = ts[ts_abs <= np.quantile(ts_abs, p)]
        ts_inliers_acfs = cls._calc_acf(ts=ts_inliers,
                                        nlags=max_nlags,
                                        unbiased=unbiased)
        dist_acfs = np.abs(detrended_acfs[:ts_inliers_acfs.size] -
                           ts_inliers_acfs)
return dist_acfs
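    # Interpretation note: a large value of ``dist_acfs`` at some lag means
    # the autocorrelation at that lag is driven mainly by the most extreme
    # observations of the series.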
| 38.904952 | 79 | 0.565501 | 5,439 | 48,709 | 4.971318 | 0.075933 | 0.039277 | 0.026628 | 0.015977 | 0.827397 | 0.801731 | 0.789637 | 0.775657 | 0.756796 | 0.745257 | 0 | 0.016001 | 0.354616 | 48,709 | 1,251 | 80 | 38.936051 | 0.844096 | 0.549898 | 0 | 0.656489 | 0 | 0 | 0.016913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053435 | false | 0.007634 | 0.022901 | 0 | 0.147583 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa1ad38adc981aee0e6426cd068a6d7479353d17 | 153 | py | Python | uytrgfs.py | jatinchaudhary/python_dump | 5f7d63237fcb96a66bc3c4151599c7849f0f2735 | [
"bzip2-1.0.6"
] | null | null | null | uytrgfs.py | jatinchaudhary/python_dump | 5f7d63237fcb96a66bc3c4151599c7849f0f2735 | [
"bzip2-1.0.6"
] | null | null | null | uytrgfs.py | jatinchaudhary/python_dump | 5f7d63237fcb96a66bc3c4151599c7849f0f2735 | [
"bzip2-1.0.6"
] | null | null | null | class aaa:
    a = 0
    b = 0
    c = 0
    # Python has no method overloading: defining __init__ twice makes the
    # second definition silently replace the first, so a single constructor
    # with a default argument is used instead.
    def __init__(self, a, b=0):
        self.a = a
        self.b = b
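# Illustrative usage of the merged constructor:
#     one = aaa(1)        # b keeps its default of 0
#     two = aaa(1, 2)     # both attributes set explicitly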
| 12.75 | 28 | 0.424837 | 26 | 153 | 2.192308 | 0.346154 | 0.350877 | 0.385965 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.45098 | 153 | 11 | 29 | 13.909091 | 0.642857 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
a4b0fe3aae0e994ca806d9179d143ea0d478a8ff | 3,168 | py | Python | tools/db/tags/classifier/getData.py | rjawor/tagging | 8713c6835e2c1ddc8742d2954165c9f42d47f2a8 | [
"MIT"
] | null | null | null | tools/db/tags/classifier/getData.py | rjawor/tagging | 8713c6835e2c1ddc8742d2954165c9f42d47f2a8 | [
"MIT"
] | null | null | null | tools/db/tags/classifier/getData.py | rjawor/tagging | 8713c6835e2c1ddc8742d2954165c9f42d47f2a8 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
import MySQLdb as mdb
import sys
import re
from numpy import array_split
con = mdb.connect('localhost', 'webuser', 'tialof', 'taggingdb');
with con, open('/tmp/data.txt','w') as f:
print "Getting data"
cur = con.cursor(mdb.cursors.DictCursor)
# for Awadhi by kstronski only:
cur.execute("select words.sentence_id, words.id, (case words.split when 1 then concat(words.stem, words.suffix) else words.text end) as word_text, group_concat(word_annotation_type_choice_id order by word_annotation_type_choice_id asc) as tag from words inner join sentences on words.sentence_id = sentences.id inner join documents on sentences.document_id = documents.id and documents.language_id = 3 and documents.user_id = 3 left join word_annotations on words.id = word_annotations.word_id left join word_annotation_type_choices_word_annotations on word_annotations.id = word_annotation_type_choices_word_annotations.word_annotation_id group by word_id order by sentence_id, words.position")
# for Rajasthani only:
# cur.execute("select words.sentence_id, words.id, (case words.split when 1 then concat(words.stem, words.suffix) else words.text end) as word_text, group_concat(word_annotation_type_choice_id order by word_annotation_type_choice_id asc) as tag from words inner join sentences on words.sentence_id = sentences.id inner join documents on sentences.document_id = documents.id and documents.language_id = 2 left join word_annotations on words.id = word_annotations.word_id left join word_annotation_type_choices_word_annotations on word_annotations.id = word_annotation_type_choices_word_annotations.word_annotation_id group by word_id order by sentence_id, words.position")
# cur.execute("select words.sentence_id, words.id, (case words.split when 1 then concat(words.stem, words.suffix) else words.text end) as word_text, group_concat(word_annotation_type_choice_id order by word_annotation_type_choice_id asc) as tag from words left join word_annotations on words.id = word_annotations.word_id left join word_annotation_type_choices_word_annotations on word_annotations.id = word_annotation_type_choices_word_annotations.word_annotation_id group by word_id order by sentence_id, words.position")
lastSentId = -1
sentence = []
labels = []
for i in range(cur.rowcount):
row = cur.fetchone()
if row['word_text']:
            if row['sentence_id'] != lastSentId:
                if lastSentId != -1:
sentence_text = ' '.join([w[0]+'_'+('1' if w[1] else '0') for w in zip(sentence,labels)])+'\n'
f.write(sentence_text)
#print sentence_text
sentence = []
labels = []
lastSentId = row['sentence_id']
            text = re.sub(r'\s+', '', row['word_text']).replace('|', '')
if len(text) > 0:
sentence.append(text)
label = False
if row['tag']:
label = '21' in row['tag'].split(',') or '85' in row['tag'].split(',')
labels.append(label)
print "Test data got"
| 58.666667 | 699 | 0.706124 | 459 | 3,168 | 4.649237 | 0.22658 | 0.098407 | 0.101218 | 0.067479 | 0.703374 | 0.703374 | 0.703374 | 0.703374 | 0.703374 | 0.703374 | 0 | 0.007081 | 0.197601 | 3,168 | 53 | 700 | 59.773585 | 0.832415 | 0.411301 | 0 | 0.133333 | 0 | 0.033333 | 0.438644 | 0.133477 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.133333 | null | null | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4c0244c788d3f533848d2f0e6df3cb4d6d4be45 | 3,149 | py | Python | protlearn/features/tests/test_binary.py | tadorfer/ProtClass | da1a01ea9abd3c367b3389dfed683c6a9dfa6afd | [
"MIT"
] | 24 | 2020-09-17T10:35:44.000Z | 2022-03-09T19:19:01.000Z | protlearn/features/tests/test_binary.py | tadorfer/ProtClass | da1a01ea9abd3c367b3389dfed683c6a9dfa6afd | [
"MIT"
] | 14 | 2020-08-09T18:23:01.000Z | 2020-11-19T05:48:14.000Z | protlearn/features/tests/test_binary.py | tadorfer/ProtClass | da1a01ea9abd3c367b3389dfed683c6a9dfa6afd | [
"MIT"
] | 3 | 2021-03-07T23:41:17.000Z | 2022-02-25T18:48:37.000Z | import pytest
import numpy as np
from ..binary import binary
import pkg_resources
PATH = pkg_resources.resource_filename(__name__, 'test_data/')
def test_binary():
"Test binary profile pattern"
# load data
    with open(PATH + 'multiple.txt') as f:
        X_list = f.read().splitlines()
X_err = 'AGT2HT9'
# get binary
binary_list = binary(X_list, padding=True)
# test binary
assert np.array_equal(binary_list, np.array([
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.]]))
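    # Note: with padding enabled, each sequence is flattened into one
    # fixed-length binary vector (a one-hot block per residue position),
    # which is why the three rows above all have the same length.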
    # test ValueError (sequence containing non-alphabetical characters)
with pytest.raises(ValueError):
binary_err = binary(X_err, padding=True)
    # test ValueError (sequences of unequal length while padding is disabled)
with pytest.raises(ValueError):
binary_err = binary(X_list, padding=False) | 49.984127 | 71 | 0.323595 | 630 | 3,149 | 1.587302 | 0.066667 | 0.984 | 1.404 | 1.776 | 0.624 | 0.624 | 0.624 | 0.624 | 0.54 | 0.54 | 0 | 0.249081 | 0.308987 | 3,149 | 63 | 72 | 49.984127 | 0.210478 | 0.040648 | 0 | 0.45098 | 0 | 0 | 0.018391 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.019608 | false | 0 | 0.078431 | 0 | 0.098039 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4edb11dfbb7382ca3b82c1060f0c4e7bc946416 | 42 | py | Python | kivymd/uix/slider/__init__.py | marvelous-benji/KivyMD | 4ab8dd339902597eaa9f8a4f9a80d8a6eb7d6053 | [
"MIT"
] | 1,111 | 2015-07-15T02:31:09.000Z | 2022-03-29T17:22:02.000Z | kivymd/uix/slider/__init__.py | marvelous-benji/KivyMD | 4ab8dd339902597eaa9f8a4f9a80d8a6eb7d6053 | [
"MIT"
] | 706 | 2015-06-10T22:24:13.000Z | 2022-03-31T16:22:39.000Z | kivymd/uix/slider/__init__.py | marvelous-benji/KivyMD | 4ab8dd339902597eaa9f8a4f9a80d8a6eb7d6053 | [
"MIT"
] | 561 | 2015-07-15T04:57:23.000Z | 2022-03-31T17:14:31.000Z | from .slider import MDSlider # NOQA F401
| 21 | 41 | 0.761905 | 6 | 42 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0.190476 | 42 | 1 | 42 | 42 | 0.852941 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a4f188a67c9a8419bf7e434471628f059fa6c143 | 77 | py | Python | project_restaurant/food/main_dish.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | project_restaurant/food/main_dish.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | project_restaurant/food/main_dish.py | vasetousa/OOP | e4fedc497dd149c9800613ea11846e0e770d122c | [
"MIT"
] | null | null | null | from project_restaurant.food.food import Food
class MainDish(Food):
pass | 19.25 | 45 | 0.792208 | 11 | 77 | 5.454545 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 77 | 4 | 46 | 19.25 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
352af794bc6d96343732d8314151c842400c6191 | 154 | py | Python | hub/__init__.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 3 | 2019-02-17T23:36:35.000Z | 2022-03-01T16:43:06.000Z | hub/__init__.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 56 | 2019-01-26T16:34:12.000Z | 2022-03-23T06:57:03.000Z | hub/__init__.py | NikkiBytes/pending.api | 3c83bb8e413c3032a3a4539d19a779b5f0b67650 | [
"Apache-2.0"
] | 6 | 2020-10-22T17:37:54.000Z | 2022-03-01T16:56:55.000Z | from standalone.hub import AutoHubServer
class PendingHubServer(AutoHubServer):
    DEFAULT_FEATURES = AutoHubServer.DEFAULT_FEATURES + ["index", "api"]
| 25.666667 | 71 | 0.798701 | 15 | 154 | 8.066667 | 0.733333 | 0.330579 | 0.46281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11039 | 154 | 5 | 72 | 30.8 | 0.883212 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35471e9b934b6a8065f2c4b7277738d4853e7205 | 108 | py | Python | test_project/test_project/__init__.py | joncasdam/django-celery-fulldbresult | cc303d9437b4bf3f26334331b4c6b2d5e08619c6 | [
"BSD-3-Clause"
] | 22 | 2015-06-02T09:59:34.000Z | 2016-10-31T10:37:29.000Z | test_project/test_project/__init__.py | joncasdam/django-celery-fulldbresult | cc303d9437b4bf3f26334331b4c6b2d5e08619c6 | [
"BSD-3-Clause"
] | 18 | 2015-05-25T18:48:58.000Z | 2016-10-17T15:50:53.000Z | test_project/test_project/__init__.py | joncasdam/django-celery-fulldbresult | cc303d9437b4bf3f26334331b4c6b2d5e08619c6 | [
"BSD-3-Clause"
] | 1 | 2016-10-13T14:48:38.000Z | 2016-10-13T14:48:38.000Z | # Do not remove! Force import
from test_project.celeryapp import app as celery_app
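# Referencing the imported app below keeps linters from flagging the import
# as unused.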
if celery_app:
pass
| 18 | 52 | 0.777778 | 18 | 108 | 4.5 | 0.777778 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 108 | 5 | 53 | 21.6 | 0.920455 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
102440fd082bf2291a88cb74b1a901a8cd913a14 | 3,872 | py | Python | bindings/python/test.py | libundo/libundo | 550e03f9058c9722393dee216ec5dcf5e2712029 | [
"Apache-2.0"
] | 2 | 2019-09-29T00:47:55.000Z | 2021-08-21T08:14:18.000Z | bindings/python/test.py | libundo/libundo | 550e03f9058c9722393dee216ec5dcf5e2712029 | [
"Apache-2.0"
] | null | null | null | bindings/python/test.py | libundo/libundo | 550e03f9058c9722393dee216ec5dcf5e2712029 | [
"Apache-2.0"
] | null | null | null | import os
import unittest
from libundo import PyUndoTree
def new_tree(name):
if os.path.exists(name):
os.remove(name)
return PyUndoTree(name.encode(), ''.encode())
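# Helper note: any stale session file is deleted first, so every test starts
# from an empty undo history.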
class PyUndoTreeTestCase(unittest.TestCase):
"""Tests for navigation and serialization of PyUndoTree.
"""
def test_navigate_linear(self):
t = new_tree('test.libundo-session')
# Initial state -- one addition ('1'):
#
# 1 (@)
t.insert('My name is Joe.'.encode(), 0)
self.assertEqual(t.buffer().decode(), 'My name is Joe.')
self.assertEqual(t.head().get('id'), 1)
# Second state -- another addition ('2'):
#
# 1
# \
# 2 (@)
t.insert('My name is actually Bob.'.encode(), 0)
self.assertEqual(t.buffer().decode(), 'My name is actually Bob.')
self.assertEqual(t.head().get('id'), 2)
# Third state -- back to 'A':
#
# 1 (@)
# \
# 2
self.assertEqual(t.undo()['buffer'].decode(), 'My name is Joe.')
self.assertEqual(t.head().get('id'), 1)
# Fourth state -- back to 'B':
#
# 1
# \
# 2 (@)
self.assertEqual(t.redo()['buffer'].decode(), 'My name is actually Bob.')
self.assertEqual(t.head().get('id'), 2)
def test_navigate_branch(self):
t = new_tree('test.libundo-session')
# Initial state -- one addition ('1'):
# 1 (@)
t.insert('My name is Joe.'.encode(), 0)
self.assertEqual(t.buffer().decode(), 'My name is Joe.')
self.assertEqual(t.head().get('id'), 1)
# Second state -- two more additions ('2' & '3'):
#
# 1
# / \
# (@) 3 2
t.insert('My name is actually Bob.'.encode(), 0)
self.assertEqual(t.buffer().decode(), 'My name is actually Bob.')
self.assertEqual(t.head().get('id'), 2)
self.assertEqual(t.head().get('parent'), 1)
self.assertEqual(t.undo()['buffer'].decode(), 'My name is Joe.')
t.insert('My name is Bob.'.encode(), 0)
self.assertEqual(t.buffer().decode(), 'My name is Bob.')
self.assertEqual(t.head().get('id'), 3)
self.assertEqual(t.head().get('parent'), 1)
# Third state -- back to '2':
#
# 1
# / \
# 3 2 (@)
self.assertEqual(t.undo()['buffer'].decode(), 'My name is Joe.')
self.assertEqual(t.head().get('id'), 1)
self.assertEqual(t.redo()['buffer'].decode(), 'My name is actually Bob.')
self.assertEqual(t.head().get('id'), 2)
# Fourth state -- back to '3':
#
# 1
# / \
# (@) 3 2
self.assertEqual(t.undo()['buffer'].decode(), 'My name is Joe.')
t.switch_branch(1)
self.assertEqual(t.redo()['buffer'].decode(), 'My name is Bob.')
def test_serialize_valid(self):
t = new_tree('persist.libundo-session')
t.insert('Hello from libundo (C++)!'.encode(), 0)
self.assertEqual(len(t), 1)
t.save()
t2 = PyUndoTree(
'persist.libundo-session'.encode(),
'Hello from libundo (C++)!'.encode())
self.assertEqual(len(t2), 1)
def test_serialize_invalid(self):
t = new_tree('persist.libundo-session')
t.insert('Hello from libundo (C++)!'.encode(), 0)
self.assertEqual(len(t), 1)
t.save()
t2 = PyUndoTree(
'persist.libundo-session'.encode(),
'Hello from libundo!'.encode())
self.assertEqual(len(t2), 0)
if __name__ == '__main__':
unittest.main()
| 31.225806 | 81 | 0.493543 | 447 | 3,872 | 4.225951 | 0.17226 | 0.214399 | 0.194812 | 0.114346 | 0.767073 | 0.730016 | 0.728428 | 0.681842 | 0.681842 | 0.681842 | 0 | 0.019798 | 0.334711 | 3,872 | 123 | 82 | 31.479675 | 0.713509 | 0.168905 | 0 | 0.622951 | 0 | 0 | 0.193518 | 0.028949 | 0 | 0 | 0 | 0 | 0.442623 | 1 | 0.081967 | false | 0 | 0.04918 | 0 | 0.163934 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10252b28892a0d04505e2f593d3f98fb6f240a3a | 22 | py | Python | japanese_cloze/__init__.py | sarajaksa/anki-addons | 01e4cedca0cca1df11202c52c473a8c35eb5f0b8 | [
"Unlicense"
] | 3 | 2017-03-05T21:53:06.000Z | 2019-03-13T09:50:19.000Z | japanese_cloze/__init__.py | sarajaksa/anki-addons | 01e4cedca0cca1df11202c52c473a8c35eb5f0b8 | [
"Unlicense"
] | 3 | 2017-03-04T16:24:15.000Z | 2018-11-14T15:20:49.000Z | japanese_cloze/__init__.py | sarajaksa/anki-addons | 01e4cedca0cca1df11202c52c473a8c35eb5f0b8 | [
"Unlicense"
] | 1 | 2019-05-12T10:46:25.000Z | 2019-05-12T10:46:25.000Z | from . import clozejp
| 11 | 21 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1030af2e2bb538d2d2cf0f88d91ca8fd3953a02c | 241 | py | Python | wgeasywall/vars.py | Identeco/WGEasywall | 301a80bf86900414951b96dc9ffd1e94c52220a5 | [
"MIT"
] | null | null | null | wgeasywall/vars.py | Identeco/WGEasywall | 301a80bf86900414951b96dc9ffd1e94c52220a5 | [
"MIT"
] | 1 | 2022-01-30T10:37:20.000Z | 2022-01-30T10:37:20.000Z | wgeasywall/vars.py | araminian/wgeasywall | ee3d6d91f1097aa5f498e66e07ec2629cd198e6d | [
"MIT"
] | null | null | null | import os
def get_wgeasywall_config_location():
    # expanduser("~") is more robust than os.getenv("HOME"), which can be
    # unset and would silently yield the string "None/.wgeasywall/"
    home = os.path.expanduser("~")
    return "{0}{1}".format(home, "/.wgeasywall/")
def get_mongo_configuration_location():
return "{0}{1}".format(get_wgeasywall_config_location(),"mongo.yaml") | 30.125 | 73 | 0.717842 | 32 | 241 | 5.125 | 0.5 | 0.073171 | 0.231707 | 0.329268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018605 | 0.107884 | 241 | 8 | 73 | 30.125 | 0.744186 | 0 | 0 | 0 | 0 | 0 | 0.161157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
10895c7ee6abe8fdc0d8844cd3021b6b0a0a1be3 | 748 | py | Python | test/test_utils.py | JohnCrickett/WebScraper | 0bd6edf842153e23373b12b56b909f215ab51f06 | [
"MIT"
] | null | null | null | test/test_utils.py | JohnCrickett/WebScraper | 0bd6edf842153e23373b12b56b909f215ab51f06 | [
"MIT"
] | null | null | null | test/test_utils.py | JohnCrickett/WebScraper | 0bd6edf842153e23373b12b56b909f215ab51f06 | [
"MIT"
] | null | null | null | from scraper.utils import is_valid_url
def test_is_valid_url_invalid_urls():
assert is_valid_url("") is False
assert is_valid_url("http://") is False
assert is_valid_url("htp://www.test.com") is False
assert is_valid_url("http:/www.test.com") is False
assert is_valid_url("www.test.com") is False
def test_is_valid_url_valid_urls():
assert is_valid_url("http://domain.com") is True
assert is_valid_url("https://domain.com") is True
assert is_valid_url("http://www.domain.com") is True
assert is_valid_url("https://www.domain.com") is True
assert is_valid_url("http://www.domain.co.uk") is True
assert is_valid_url("https://www.domain.co.uk") is True
def test_is_redirect():
assert False # TODO
| 32.521739 | 59 | 0.717914 | 130 | 748 | 3.853846 | 0.184615 | 0.195609 | 0.279441 | 0.351297 | 0.842315 | 0.652695 | 0.578842 | 0.516966 | 0.516966 | 0.175649 | 0 | 0 | 0.149733 | 748 | 22 | 60 | 34 | 0.787736 | 0.005348 | 0 | 0 | 0 | 0 | 0.242588 | 0 | 0 | 0 | 0 | 0.045455 | 0.75 | 1 | 0.1875 | true | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1091dd62d445d730ca381ab0f72ab09ab4acbf9b | 47 | py | Python | scrambler/models/__init__.py | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 19 | 2021-04-30T04:12:58.000Z | 2022-03-07T19:09:32.000Z | scrambler/models/__init__.py | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 4 | 2021-07-02T15:07:27.000Z | 2021-08-01T12:41:28.000Z | scrambler/models/__init__.py | willshi88/scrambler | fd77c05824fc99e6965d204c4f5baa1e3b0c4fb3 | [
"MIT"
] | 4 | 2021-06-28T09:41:01.000Z | 2022-02-28T09:13:29.000Z | from scrambler.models.scrambler_models import * | 47 | 47 | 0.87234 | 6 | 47 | 6.666667 | 0.666667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 47 | 1 | 47 | 47 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
10a8deef0defece5a2c1fd2b3e5cc3491f5c0731 | 64 | py | Python | server/iotud/tools/__init__.py | hollwann/dashboard-iot-udistrital | a92c6b65fce5c343abeffcb5badf1f4bfd9ab1f2 | [
"MIT"
] | 2 | 2020-07-02T19:09:12.000Z | 2020-07-05T00:33:55.000Z | server/iotud/tools/__init__.py | hollwann/dashboard-iot-udistrital | a92c6b65fce5c343abeffcb5badf1f4bfd9ab1f2 | [
"MIT"
] | 3 | 2020-07-05T00:55:08.000Z | 2022-02-27T11:29:51.000Z | server/iotud/tools/__init__.py | hollwann/dashboard-iot-udistrital | a92c6b65fce5c343abeffcb5badf1f4bfd9ab1f2 | [
"MIT"
] | null | null | null |
from .http import *
from .mysql import *
from .errors import *
| 12.8 | 21 | 0.703125 | 9 | 64 | 5 | 0.555556 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203125 | 64 | 4 | 22 | 16 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10afa08f4dcdd02f20e083da4e78981d7853603d | 14,357 | py | Python | src/calculate_variable_2d.py | bdrummond1/um_post_proc | 2dc1dcaa164772e09e77cd3f3e7d927f2237228a | [
"MIT"
] | 1 | 2020-04-23T17:06:40.000Z | 2020-04-23T17:06:40.000Z | src/calculate_variable_2d.py | bdrummond1/um_post_proc | 2dc1dcaa164772e09e77cd3f3e7d927f2237228a | [
"MIT"
] | null | null | null | src/calculate_variable_2d.py | bdrummond1/um_post_proc | 2dc1dcaa164772e09e77cd3f3e7d927f2237228a | [
"MIT"
] | null | null | null | # Module to calculate variable
# Looks for requested variable, reads in necessary data and calculates
from construct_variable import *
from constant_user import *
# ---------------------------------------------
# Main function to calculate requested variable
# ---------------------------------------------
def calculate_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Zonal wind
if varname=='u':
if verbose:
print 'Requested variable is zonal wind'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Meridional wind
elif varname=='v':
if verbose:
print 'Requested variable is meridional wind'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Vertical wind
elif varname=='w':
if verbose:
print 'Requested variable is vertical wind'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Temperature
elif varname=='temp':
if verbose:
print 'Requested variable is temperature'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Surface Temperature
elif varname=='surface_temp':
if verbose:
print 'Requested variable is surface temperature'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Methane mole fraction
elif varname=='ch4_mole_fraction':
if verbose:
print 'Requested variable is methane mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Water mole fraction
elif varname=='h2o_mole_fraction':
if verbose:
print 'Requested variable is water mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Carbon monoxide mole fraction
elif varname=='co_mole_fraction':
if verbose:
print 'Requested variable is carbon monoxide mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Ammonia mole fraction
elif varname=='nh3_mole_fraction':
if verbose:
print 'Requested variable is ammonia mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Nitrogen mole fraction
elif varname=='n2_mole_fraction':
if verbose:
print 'Requested variable is nitrogen mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Hydrogen cyanide mole fraction
elif varname=='hcn_mole_fraction':
if verbose:
print 'Requested variable is hydrogen cyanide mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# carbon dioxide mole fraction
elif varname=='co2_mole_fraction':
if verbose:
print 'Requested variable is carbon dioxide mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# OH mole fraction
elif varname=='oh_mole_fraction':
if verbose:
print 'Requested variable is OH mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# H mole fraction
elif varname=='h_mole_fraction':
if verbose:
print 'Requested variable is H mole fraction'
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Longwave heating rate in [W/m3]
elif varname=='lwhr_wm3':
if verbose:
print 'Requested variable is longwave heating rate [Wm-3]'
x, y, var = get_lwhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Shortwave heating rate in [W/m3]
elif varname=='swhr_wm3':
if verbose:
print 'Requested variable is shortwave heating rate [Wm-3]'
x, y, var = get_swhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Net heating rate in [W/m3]
elif varname=='nethr_wm3':
if verbose:
print 'Requested variable is net heating rate [Wm-3]'
x, y, var = get_nethr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Radiative timescale
elif varname=='rad_timescale':
if verbose:
print 'Requested variable is radiative timescale'
x, y, var = get_rad_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Contribution function
elif varname=='cf':
if verbose:
print 'Requested variable is contribution function'
x, y, var = get_cf(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
    # Zonal advective timescale
elif varname=='u_timescale':
if verbose:
print 'Requested variable is zonal advective timescale'
x, y, var = get_u_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
    # Meridional advective timescale
elif varname=='v_timescale':
if verbose:
print 'Requested variable is meridional advective timescale'
x, y, var = get_v_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
    # Vertical advective timescale
elif varname=='w_timescale':
if verbose:
print 'Requested variable is vertical advective timescale'
x, y, var = get_w_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
else:
print 'Error: calculate_variable'
print 'variable not implemented: ',varname
exit()
return x, y, var
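# Illustrative call (argument values are placeholders, not defined here):
#   x, y, temp = calculate_variable_2d(fname, fname_keys, fname_spec, 'temp',
#                                      time_1, time_2, lon, lat_min, lat_max,
#                                      level, 'zonal_mean', pressure_grid,
#                                      vardim, instrument, nband)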
# ---------------------------------------------
# Function to calculate longwave heating rate [W/m3]
# ---------------------------------------------
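# Unit note: the heating rates read below are assumed to be in K s-1, so
# multiplying by rho * cp (with rho = p / (rspecific * T) from the ideal gas
# law) converts them to W m-3.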
def get_lwhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read heating rates
varname_loc = 'lwhr'
x, y, lwhr = construct_variable_2d(fname,fname_keys,fname_spec,varname_loc,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Read temperature
varname_loc = 'temp'
x, y, temp = construct_variable_2d(fname,fname_keys,fname_spec,varname_loc,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Plot type where y is pressure
if plot_type=='meridional_mean' or plot_type=='zonal_mean' or plot_type=='pressure_latitude' or plot_type=='pressure_longitude':
for i in range(x.size):
# Calculate mass density from ideal gas law
density = y/rspecific/temp[:,i]
# Calculate longwave heating rate in [W/m3]
lwhr[:,i] = lwhr[:,i]*cpspecific*density
else:
print 'Error: get_lwhr_wm3'
print 'Plot type ', plot_type, ' not implemented'
exit()
return x, y, lwhr
# ---------------------------------------------
# Function to calculate shortwave heating rate [W/m3]
# ---------------------------------------------
def get_swhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read heating rates
varname_loc = 'swhr'
x, y, swhr = construct_variable_2d(fname,fname_keys,fname_spec,varname_loc,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Read temperature
varname_loc = 'temp'
x, y, temp = construct_variable_2d(fname,fname_keys,fname_spec,varname_loc,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Plot type where y is pressure
if plot_type=='meridional_mean' or plot_type=='zonal_mean' or plot_type=='pressure_latitude' or plot_type=='pressure_longitude':
for i in range(x.size):
# Calculate mass density from ideal gas law
density = y/rspecific/temp[:,i]
            # Calculate shortwave heating rate in [W/m3]
swhr[:,i] = swhr[:,i]*cpspecific*density
else:
print 'Error: get_swhr_wm3'
print 'Plot type ', plot_type, ' not implemented'
exit()
return x, y, swhr
# ---------------------------------------------
# Function to calculate net heating rate [W/m3]
# ---------------------------------------------
def get_nethr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Get shortwave heating rate
x, y, swhr = get_swhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Get longwave heating rate
x, y, lwhr = get_lwhr_wm3(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Calculate net heating rate
nethr = swhr + lwhr
return x, y, nethr
# ---------------------------------------------
# Function to calculate radiative timescale [s] from Showman and Guillot 2002, Eq 10
# ---------------------------------------------
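# Implements t_rad = (P / g) * cp / (4 * sigma * T**3): pressure over gravity,
# times specific heat capacity, over four times the Stefan-Boltzmann constant
# times temperature cubed.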
def get_rad_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read temperature
varname_loc = 'temp'
x, y, temp = construct_variable_2d(fname,fname_keys,fname_spec,varname_loc,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Define new variable array
var = zeros((y.size,x.size))
# Plot type where y is pressure
if plot_type=='meridional_mean' or plot_type=='zonal_mean' or plot_type=='pressure_latitude' or plot_type=='pressure_longitude':
for i in range(x.size):
var[:,i] = surface_gravity*4.*sigma*temp[:,i]**3
var[:,i] = y*cpspecific/var[:,i]
else:
        print 'Error: get_rad_timescale'
print 'Plot type ', plot_type, ' not implemented'
exit()
return x, y, var
# ---------------------------------------------
# Function to calculate normalised contribution function
# ---------------------------------------------
def get_cf(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read contribution function
x, y, cf = construct_variable_2d(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
if plot_type=='zonal_mean' or plot_type=='meridional_mean' or plot_type=='pressure_longitude':
# Assume pressure is first dimension
dims = cf.shape
var = zeros(dims)
for i in range(dims[1]):
var[:,i] = cf[:,i]/amax(cf[:,i])
else:
var = cf/amax(cf)
return x, y, var
# ---------------------------------------------
# Function to calculate zonal advective timescale
# ---------------------------------------------
def get_u_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read meridional wind
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,'u',time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Calculate timescale
var = 2.*pi*Rp/abs(var)
return x, y, var
# ---------------------------------------------
# Function to calculate meridional advective timescale
# ---------------------------------------------
def get_v_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read meridional wind
x, y, var = construct_variable_2d(fname,fname_keys,fname_spec,'v',time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Calculate timescale
var = pi*Rp/abs(var)/2.
return x, y, var
# ---------------------------------------------
# Function to calculate vertical advective timescale
# ---------------------------------------------
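# Uses the pressure scale height H = kb * T / (mu * amu * g) and t_w = H / |w|.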
def get_w_timescale(fname,fname_keys,fname_spec,varname,time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband):
# Read vertical wind
x, y, w = construct_variable_2d(fname,fname_keys,fname_spec,'w',time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
    # Get temperature
x, y, temp = construct_variable_2d(fname,fname_keys,fname_spec,'temp',time_1,time_2,lon_request,lat_min,lat_max,
level,plot_type,pressure_grid,vardim,instrument,nband)
# Calculate scale height
H = kb*temp/(mu*amu*surf_gravity)
# Calculate timescale
var = H/abs(w)
return x, y, var
| 39.770083 | 130 | 0.707808 | 2,094 | 14,357 | 4.600764 | 0.072588 | 0.055636 | 0.083039 | 0.084804 | 0.826656 | 0.823438 | 0.806726 | 0.727839 | 0.679053 | 0.668466 | 0 | 0.012304 | 0.139514 | 14,357 | 360 | 131 | 39.880556 | 0.767525 | 0.180261 | 0 | 0.54067 | 0 | 0 | 0.139595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.009569 | null | null | 0.143541 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52c3539589f56263ee977c8deda504e49e6fd5c6 | 31,372 | py | Python | swifter/swifter_tests.py | openafox/swifter | 5d06d136d5ee50c6e1c2331efac33b32fe0183a7 | [
"MIT"
] | null | null | null | swifter/swifter_tests.py | openafox/swifter | 5d06d136d5ee50c6e1c2331efac33b32fe0183a7 | [
"MIT"
] | null | null | null | swifter/swifter_tests.py | openafox/swifter | 5d06d136d5ee50c6e1c2331efac33b32fe0183a7 | [
"MIT"
] | null | null | null | import sys
import unittest
import subprocess
import time
import logging
from psutil import cpu_count, virtual_memory
import numpy as np
import numpy.testing as npt
import pandas as pd
import swifter
from math import ceil, isclose
from tqdm.auto import tqdm
LOG = logging.getLogger(__name__)
LOG.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)-8s.%(msecs)03d %(levelname)-8s %(name)s:%(lineno)-3s %(message)s")
ch.setFormatter(formatter)
LOG.addHandler(ch)
def math_vec_square(x):
return x ** 2
def math_foo(x, compare_to=1):
return x ** 2 if x < compare_to else x ** (1 / 2)
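# e.g. math_foo(0.5) == 0.25 (square branch), math_foo(4) == 2.0 (root branch)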
def math_vec_multiply(row):
return row["x"] * row["y"]
def math_agg_foo(row):
return row.sum() - row.min()
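# e.g. math_agg_foo(pd.Series([1, 2, 3])) == (1 + 2 + 3) - 1 == 5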
def text_foo(row):
if row["letter"] == "A":
return row["value"] * 3
elif row["letter"] == "B":
return row["value"] ** 3
elif row["letter"] == "C":
return row["value"] / 3
elif row["letter"] == "D":
return row["value"] ** (1 / 3)
elif row["letter"] == "E":
return row["value"]
def clean_text_foo(row):
text = " ".join(row)
text = text.strip()
text = text.replace(" ", "_")
return text
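# e.g. clean_text_foo(pd.Series([" I", "want to break free "])) returns
# "I_want_to_break_free"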
class TestSwifter(unittest.TestCase):
def assertSeriesEqual(self, a, b, msg):
try:
pd.testing.assert_series_equal(a, b)
except AssertionError as e:
raise self.failureException(msg) from e
def assertDataFrameEqual(self, a, b, msg):
try:
pd.testing.assert_frame_equal(a, b)
except AssertionError as e:
raise self.failureException(msg) from e
def assertModinSeriesEqual(self, a, b, msg):
try:
npt.assert_array_almost_equal(a, b)
except AssertionError as e:
raise self.failureException(msg) from e
def assertModinDataFrameEqual(self, a, b, msg):
try:
npt.assert_array_almost_equal(a, b)
except AssertionError as e:
raise self.failureException(msg) from e
def modinSetUp(self):
"""
Imports modin before swifter so that we have access to modin functionality
"""
import modin.pandas as md
import swifter
swifter.register_modin()
self.addTypeEqualityFunc(md.Series, self.assertModinSeriesEqual)
self.addTypeEqualityFunc(md.DataFrame, self.assertModinDataFrameEqual)
return md
def setUp(self):
LOG.info(f"Version {swifter.__version__}")
self.addTypeEqualityFunc(pd.Series, self.assertSeriesEqual)
self.addTypeEqualityFunc(pd.DataFrame, self.assertDataFrameEqual)
self.ncores = cpu_count()
class TestSetup(TestSwifter):
def test_set_npartitions(self):
LOG.info("test_set_npartitions")
for swifter_df, set_npartitions, expected in zip(
[
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
],
[None, 1000, 1001, 1002],
[cpu_count() * 2, 1000, 1001, 1002],
):
before = swifter_df._npartitions
swifter_df.set_npartitions(set_npartitions)
actual = swifter_df._npartitions
self.assertEqual(actual, expected)
if set_npartitions is not None:
self.assertNotEqual(before, actual)
def test_set_ray_compute(self):
LOG.info("test_set_ray_compute")
for swifter_df, set_ray_memory, expected in zip(
[
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
],
[0.5, 0.99, 52428800],
[ceil(virtual_memory().available * 0.5), ceil(virtual_memory().available * 0.99), 52428800,],
):
before = swifter_df._ray_memory
swifter_df.set_ray_compute(num_cpus=1, memory=set_ray_memory)
actual = swifter_df._ray_memory
self.assertTrue(isclose(actual, expected, rel_tol=0.2))
self.assertNotEqual(before, actual)
def test_cant_set_ray_memory_OOM(self):
LOG.info("test_cant_set_ray_memory_OOM")
for swifter_df, set_ray_memory in zip(
[
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
],
[1e100, 1e100, 1e100, 1e100],
):
with self.assertRaises(MemoryError):
swifter_df.set_ray_compute(memory=set_ray_memory)
def test_set_dask_threshold(self):
LOG.info("test_set_dask_threshold")
expected = 1000
for swifter_df in [
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
]:
before = swifter_df._dask_threshold
swifter_df.set_dask_threshold(expected)
actual = swifter_df._dask_threshold
self.assertEqual(actual, expected)
self.assertNotEqual(before, actual)
def test_set_dask_scheduler(self):
LOG.info("test_set_dask_scheduler")
expected = "my-scheduler"
for swifter_df in [
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
]:
before = swifter_df._scheduler
swifter_df.set_dask_scheduler(expected)
actual = swifter_df._scheduler
self.assertEqual(actual, expected)
self.assertNotEqual(before, actual)
def test_disable_progress_bar(self):
LOG.info("test_disable_progress_bar")
expected = False
for swifter_df in [
pd.DataFrame().swifter,
pd.Series().swifter,
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.rolling("1d"),
pd.DataFrame(
{"x": np.arange(0, 10)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=10)
).swifter.resample("3T"),
]:
before = swifter_df._progress_bar
swifter_df.progress_bar(expected)
actual = swifter_df._progress_bar
self.assertEqual(actual, expected)
self.assertNotEqual(before, actual)
def test_allow_dask_on_strings(self):
LOG.info("test_allow_dask_on_strings")
expected = True
swifter_df = pd.DataFrame().swifter
before = swifter_df._allow_dask_on_strings
swifter_df.allow_dask_on_strings(expected)
actual = swifter_df._allow_dask_on_strings
self.assertEqual(actual, expected)
self.assertNotEqual(before, actual)
def test_stdout_redirected(self):
LOG.info("test_stdout_redirected")
print_messages = subprocess.check_output(
[
sys.executable,
"-c",
"import pandas as pd; import numpy as np; import swifter; "
+ "df = pd.DataFrame({'x': np.random.normal(size=4)}, dtype='float32'); "
+ "df.swifter.progress_bar(enable=False).apply(lambda x: print(x.values))",
],
stderr=subprocess.STDOUT,
)
self.assertEqual(len(print_messages.decode("utf-8").rstrip("\n").split("\n")), 1)
class TestPandasSeries(TestSwifter):
def test_apply_on_empty_series(self):
LOG.info("test_apply_on_empty_series")
series = pd.Series()
pd_val = series.apply(math_foo, compare_to=1)
swifter_val = series.swifter.apply(math_foo, compare_to=1)
self.assertEqual(pd_val, swifter_val) # equality test
def test_nonvectorized_math_apply_on_small_series(self):
LOG.info("test_nonvectorized_math_apply_on_small_series")
df = pd.DataFrame({"x": np.random.normal(size=1000)})
series = df["x"]
tqdm.pandas(desc="Pandas Vec math apply ~ Series")
pd_val = series.progress_apply(math_foo, compare_to=1)
swifter_val = series.swifter.progress_bar(desc="Vec math apply ~ Series").apply(math_foo, compare_to=1)
self.assertEqual(pd_val, swifter_val) # equality test
def test_nonvectorized_math_apply_on_small_series_no_progress_bar(self):
LOG.info("test_nonvectorized_math_apply_on_small_series_no_progress_bar")
df = pd.DataFrame({"x": np.random.normal(size=1000)})
series = df["x"]
pd_val = series.apply(math_foo, compare_to=1)
swifter_val = series.swifter.progress_bar(enable=False).apply(math_foo, compare_to=1)
self.assertEqual(pd_val, swifter_val) # equality test
def test_vectorized_math_apply_on_large_series(self):
LOG.info("test_vectorized_math_apply_on_large_series")
df = pd.DataFrame({"x": np.random.normal(size=1_000_000)})
series = df["x"]
tqdm.pandas(desc="Pandas Vec math apply ~ Series")
start_pd = time.time()
pd_val = series.progress_apply(math_vec_square)
end_pd = time.time()
pd_time = end_pd - start_pd
start_swifter = time.time()
swifter_val = (
series.swifter.set_npartitions(4)
.progress_bar(desc="Vec math apply ~ Series")
.apply(math_vec_square, axis=0)
)
end_swifter = time.time()
swifter_time = end_swifter - start_swifter
self.assertEqual(pd_val, swifter_val) # equality test
if self.ncores > 1: # speed test
self.assertLess(swifter_time, pd_time)
def test_nonvectorized_math_apply_on_large_series(self):
LOG.info("test_nonvectorized_math_apply_on_large_series")
df = pd.DataFrame({"x": np.random.normal(size=10_000_000)})
series = df["x"]
tqdm.pandas(desc="Pandas Nonvec math apply ~ Series")
start_pd = time.time()
pd_val = series.progress_apply(math_foo, compare_to=1)
end_pd = time.time()
pd_time = end_pd - start_pd
start_swifter = time.time()
swifter_val = (
series.swifter.set_npartitions(4)
.progress_bar(desc="Nonvec math apply ~ Series")
.apply(math_foo, compare_to=1)
)
end_swifter = time.time()
swifter_time = end_swifter - start_swifter
self.assertEqual(pd_val, swifter_val) # equality test
if self.ncores > 1: # speed test
self.assertLess(swifter_time, pd_time)


class TestPandasDataFrame(TestSwifter):
    def test_apply_on_empty_dataframe(self):
        LOG.info("test_apply_on_empty_dataframe")
        df = pd.DataFrame(columns=["x", "y"])
        pd_val = df.apply(math_vec_multiply, axis=1)
        swifter_val = df.swifter.apply(math_vec_multiply, axis=1)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_applymap_on_empty_dataframe(self):
        LOG.info("test_applymap_on_empty_dataframe")
        df = pd.DataFrame(columns=["x", "y"])
        pd_val = df.applymap(math_vec_square)
        swifter_val = df.swifter.applymap(math_vec_square)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_apply_on_small_dataframe(self):
        LOG.info("test_nonvectorized_math_apply_on_small_dataframe")
        df = pd.DataFrame({"x": np.random.normal(size=1000), "y": np.random.uniform(size=1000)})
        tqdm.pandas(desc="Pandas Nonvec math apply ~ DF")
        pd_val = df.progress_apply(math_agg_foo)
        swifter_val = df.swifter.progress_bar(desc="Vec math apply ~ DF").apply(math_agg_foo)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_apply_on_small_dataframe_no_progress_bar(self):
        LOG.info("test_nonvectorized_math_apply_on_small_dataframe_no_progress_bar")
        df = pd.DataFrame({"x": np.random.normal(size=1000), "y": np.random.uniform(size=1000)})
        pd_val = df.apply(math_agg_foo)
        swifter_val = df.swifter.progress_bar(enable=False).apply(math_agg_foo)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_vectorized_math_apply_on_large_dataframe(self):
        LOG.info("test_vectorized_math_apply_on_large_dataframe")
        df = pd.DataFrame({"x": np.random.normal(size=1_000_000), "y": np.random.uniform(size=1_000_000)})

        tqdm.pandas(desc="Pandas Vec math apply ~ DF")
        start_pd = time.time()
        pd_val = df.progress_apply(math_vec_multiply, axis=1)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4).progress_bar(desc="Vec math apply ~ DF").apply(math_vec_multiply, axis=1)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_apply_on_large_dataframe_broadcast(self):
        LOG.info("test_nonvectorized_math_apply_on_large_dataframe_broadcast")
        df = pd.DataFrame({"x": np.random.normal(size=250_000), "y": np.random.uniform(size=250_000)})

        tqdm.pandas(desc="Pandas Nonvec math apply + broadcast ~ DF")
        start_pd = time.time()
        pd_val = df.progress_apply(math_agg_foo, axis=1, result_type="broadcast")
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4)
            .progress_bar(desc="Nonvec math apply + broadcast ~ DF")
            .apply(math_agg_foo, axis=1, result_type="broadcast")
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_apply_on_large_dataframe_reduce(self):
        LOG.info("test_nonvectorized_math_apply_on_large_dataframe_reduce")
        df = pd.DataFrame({"x": np.random.normal(size=250_000), "y": np.random.uniform(size=250_000)})

        tqdm.pandas(desc="Pandas Nonvec math apply + reduce ~ DF")
        start_pd = time.time()
        pd_val = df.progress_apply(math_agg_foo, axis=1, result_type="reduce")
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4)
            .progress_bar(desc="Nonvec math apply + reduce ~ DF")
            .apply(math_agg_foo, axis=1, result_type="reduce")
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_text_dask_apply_on_large_dataframe(self):
        LOG.info("test_nonvectorized_text_dask_apply_on_large_dataframe")
        df = pd.DataFrame({"letter": ["A", "B", "C", "D", "E"] * 200_000, "value": np.random.normal(size=1_000_000)})

        tqdm.pandas(desc="Pandas Nonvec text apply ~ DF")
        start_pd = time.time()
        pd_val = df.progress_apply(text_foo, axis=1)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.allow_dask_on_strings(True)
            .set_npartitions(4)
            .progress_bar(desc="Nonvec Dask text apply ~ DF")
            .apply(text_foo, axis=1)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_text_modin_apply_on_large_dataframe(self):
        LOG.info("test_nonvectorized_text_modin_apply_on_large_dataframe")
        df = pd.DataFrame({"letter": ["I", "You", "We"] * 1_000_000, "value": ["want to break free"] * 3_000_000})

        tqdm.pandas(desc="Pandas Nonvec text apply ~ DF")
        start_pd = time.time()
        pd_val = df.progress_apply(clean_text_foo, axis=1)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.allow_dask_on_strings(False)
            .set_npartitions(4)
            .set_ray_compute(num_cpus=2 if self.ncores >= 2 else 1, memory=0.25)
            .progress_bar(desc="Nonvec Modin text apply ~ DF")
            .apply(clean_text_foo, axis=1)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_text_modin_apply_on_large_dataframe_returns_series(self):
        LOG.info("test_nonvectorized_text_modin_apply_on_large_dataframe_returns_series")
        df = pd.DataFrame({"str_date": ["2000/01/01 00:00:00"] * 1_000_000})

        tqdm.pandas(desc="Pandas Nonvec text apply ~ DF -> Srs")
        start_pd = time.time()
        pd_val = df.progress_apply(lambda row: row["str_date"].split()[0], axis=1)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.allow_dask_on_strings(False)
            .set_npartitions(4)
            .set_ray_compute(num_cpus=2 if self.ncores >= 2 else 1, memory=0.25)
            .progress_bar(desc="Nonvec Modin text apply ~ DF -> Srs")
            .apply(lambda row: row["str_date"].split()[0], axis=1)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_vectorized_math_applymap_on_large_dataframe(self):
        LOG.info("test_vectorized_math_applymap_on_large_dataframe")
        df = pd.DataFrame({"x": np.random.normal(size=1_000_000), "y": np.random.uniform(size=1_000_000)})

        tqdm.pandas(desc="Pandas Vec math applymap ~ DF")
        start_pd = time.time()
        pd_val = df.progress_applymap(math_vec_square)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4).progress_bar(desc="Vec math applymap ~ DF").applymap(math_vec_square)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_applymap_on_large_dataframe(self):
        LOG.info("test_nonvectorized_math_applymap_on_large_dataframe")
        df = pd.DataFrame({"x": np.random.normal(size=5_000_000), "y": np.random.uniform(size=5_000_000)})

        tqdm.pandas(desc="Pandas Nonvec math applymap ~ DF")
        start_pd = time.time()
        pd_val = df.progress_applymap(math_foo)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = df.swifter.set_npartitions(4).progress_bar(desc="Nonvec math applymap ~ DF").applymap(math_foo)
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_applymap_on_small_dataframe(self):
        LOG.info("test_nonvectorized_math_applymap_on_small_dataframe")
        df = pd.DataFrame({"x": np.random.normal(size=1000), "y": np.random.uniform(size=1000)})
        pd_val = df.applymap(math_foo)
        swifter_val = df.swifter.set_npartitions(4).applymap(math_foo)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_applymap_on_small_dataframe_no_progress_bar(self):
        LOG.info("test_nonvectorized_math_applymap_on_small_dataframe_no_progress_bar")
        df = pd.DataFrame({"x": np.random.normal(size=1000), "y": np.random.uniform(size=1000)})
        pd_val = df.applymap(math_foo)
        swifter_val = df.swifter.progress_bar(enable=False).applymap(math_foo)
        self.assertEqual(pd_val, swifter_val)  # equality test


class TestPandasTransformation(TestSwifter):
    def test_rolling_apply_on_empty_dataframe(self):
        LOG.info("test_rolling_apply_on_empty_dataframe")
        df = pd.DataFrame(columns=["x", "y"])
        pd_val = df.rolling(1).apply(math_agg_foo, raw=True)
        swifter_val = df.swifter.set_npartitions(4).rolling(1).apply(math_agg_foo, raw=True)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_resample_apply_on_empty_dataframe(self):
        LOG.info("test_resample_apply_on_empty_dataframe")
        df = pd.DataFrame(columns=["x", "y"], index=pd.date_range(start="2020/01/01", periods=0))
        pd_val = df.resample("1d").apply(math_agg_foo)
        swifter_val = df.swifter.set_npartitions(4).resample("1d").apply(math_agg_foo)
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_apply_on_small_rolling_dataframe(self):
        LOG.info("test_nonvectorized_math_apply_on_small_rolling_dataframe")
        df = pd.DataFrame({"x": np.arange(0, 1000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=1000))
        pd_val = df.rolling("1d").apply(math_agg_foo, raw=True)
        swifter_val = (
            df.swifter.set_npartitions(4)
            .rolling("1d")
            .progress_bar(desc="Nonvec math apply ~ Rolling DF")
            .apply(math_agg_foo, raw=True)
        )
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_apply_on_small_rolling_dataframe_no_progress_bar(self):
        LOG.info("test_nonvectorized_math_apply_on_small_rolling_dataframe_no_progress_bar")
        df = pd.DataFrame({"x": np.arange(0, 1000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=1000))
        pd_val = df.rolling("1d").apply(math_agg_foo, raw=True)
        swifter_val = (
            df.swifter.set_npartitions(4).rolling("1d").progress_bar(enable=False).apply(math_agg_foo, raw=True)
        )
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_vectorized_math_apply_on_large_rolling_dataframe(self):
        LOG.info("test_vectorized_math_apply_on_large_rolling_dataframe")
        df = pd.DataFrame(
            {"x": np.arange(0, 1_000_000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=1_000_000)
        )

        start_pd = time.time()
        pd_val = df.rolling("1d").apply(max, raw=True)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4)
            .rolling("1d")
            .progress_bar(desc="Vec math apply ~ Rolling DF")
            .apply(max, raw=True)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_apply_on_large_rolling_dataframe(self):
        LOG.info("test_nonvectorized_math_apply_on_large_rolling_dataframe")
        df = pd.DataFrame(
            {"x": np.arange(0, 7_000_000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=7_000_000)
        )

        start_pd = time.time()
        pd_val = df.rolling("3T").apply(math_agg_foo, raw=True)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(7)
            .rolling("3T")
            .progress_bar(desc="Nonvec math apply ~ Rolling DF")
            .apply(math_agg_foo, raw=True)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)

    def test_nonvectorized_math_apply_on_small_resampler_dataframe(self):
        LOG.info("test_nonvectorized_math_apply_on_small_resampler_dataframe")
        df = pd.DataFrame({"x": np.arange(0, 1000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=1000))
        pd_val = df.resample("1M").apply(math_agg_foo)
        swifter_val = (
            df.swifter.set_npartitions(4)
            .resample("1M")
            .progress_bar(desc="Nonvec math apply ~ Resample DF")
            .apply(math_agg_foo)
        )
        self.assertEqual(pd_val, swifter_val)  # equality test

    def test_nonvectorized_math_apply_on_large_resampler_dataframe(self):
        LOG.info("test_nonvectorized_math_apply_on_large_resampler_dataframe")
        df = pd.DataFrame(
            {"x": np.arange(0, 1_000_000)}, index=pd.date_range("2019-01-1", "2020-01-1", periods=1_000_000)
        )

        start_pd = time.time()
        pd_val = df.resample("3T").apply(math_agg_foo)
        end_pd = time.time()
        pd_time = end_pd - start_pd

        start_swifter = time.time()
        swifter_val = (
            df.swifter.set_npartitions(4)
            .resample("3T")
            .progress_bar(desc="Nonvec math apply ~ Resample DF")
            .apply(math_agg_foo)
        )
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(pd_val, swifter_val)  # equality test
        if self.ncores > 1:  # speed test
            self.assertLess(swifter_time, pd_time)


class TestModinSeries(TestSwifter):
    def test_apply_on_empty_modin_series(self):
        # Log message fixed to match the test name (was a copy-paste of
        # "test_apply_on_empty_series")
        LOG.info("test_apply_on_empty_modin_series")
        md = self.modinSetUp()
        series = md.Series()
        md_val = series.apply(math_foo, compare_to=1)
        swifter_val = series.swifter.apply(math_foo, compare_to=1)
        self.assertEqual(md_val, swifter_val)  # equality test

    def test_nonvectorized_modin_apply_on_small_series(self):
        LOG.info("test_nonvectorized_modin_apply_on_small_series")
        md = self.modinSetUp()
        df = md.Series(np.random.normal(size=200_000), name="x")
        md_val = df.apply(math_foo)
        swifter_val = df.swifter.set_npartitions(4).apply(math_foo)
        self.assertEqual(md_val, swifter_val)  # equality test

    def test_vectorized_modin_apply_on_large_series(self):
        LOG.info("test_vectorized_modin_apply_on_large_series")
        md = self.modinSetUp()
        df = md.Series(np.random.uniform(size=20_000_000), name="x")

        start_md = time.time()
        md_val = df.apply(math_vec_square, axis=0)
        # We have to bring it into pandas to confirm swifter apply speed is quicker
        md_pd_val = md_val._to_pandas()
        end_md = time.time()
        md_time = end_md - start_md

        start_swifter = time.time()
        swifter_val = df.swifter.set_npartitions(4).apply(math_vec_square)
        # We have to bring it into pandas to confirm swifter apply speed is quicker
        swifter_pd_val = swifter_val._to_pandas()
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(md_val, swifter_val)  # equality test
        self.assertEqual(md_pd_val, swifter_pd_val)  # equality test after converting to pandas
        self.assertLess(swifter_time, md_time)  # speed test


class TestModinDataFrame(TestSwifter):
    def test_apply_on_empty_modin_dataframe(self):
        # Log message fixed to match the test name (was a copy-paste of
        # "test_apply_on_empty_series")
        LOG.info("test_apply_on_empty_modin_dataframe")
        md = self.modinSetUp()
        df = md.DataFrame()
        md_val = df.apply(math_foo, compare_to=1)
        swifter_val = df.swifter.apply(math_foo, compare_to=1)
        self.assertEqual(md_val, swifter_val)  # equality test

    def test_nonvectorized_modin_apply_on_small_dataframe(self):
        LOG.info("test_nonvectorized_modin_apply_on_small_dataframe")
        md = self.modinSetUp()
        df = md.DataFrame({"letter": ["A", "B", "C", "D", "E"] * 200_000, "value": np.random.normal(size=1_000_000)})
        md_val = df.apply(text_foo, axis=1)
        swifter_val = df.swifter.set_npartitions(4).apply(text_foo, axis=1)
        self.assertEqual(md_val, swifter_val)  # equality test

    def test_vectorized_modin_apply_on_large_dataframe(self):
        LOG.info("test_vectorized_modin_apply_on_large_dataframe")
        md = self.modinSetUp()
        df = md.DataFrame({"x": np.random.normal(size=1_000_000), "y": np.random.uniform(size=1_000_000)})

        start_md = time.time()
        md_val = df.apply(math_vec_square, axis=1)
        # We have to bring it into pandas to confirm swifter apply speed is quicker
        md_pd_val = md_val._to_pandas()
        end_md = time.time()
        md_time = end_md - start_md

        start_swifter = time.time()
        swifter_val = df.swifter.set_npartitions(4).apply(math_vec_square, axis=1)
        # We have to bring it into pandas to confirm swifter apply speed is quicker
        swifter_pd_val = swifter_val._to_pandas()
        end_swifter = time.time()
        swifter_time = end_swifter - start_swifter

        self.assertEqual(md_val, swifter_val)  # equality test
        self.assertEqual(md_pd_val, swifter_pd_val)  # equality test after converting to pandas
        self.assertLess(swifter_time, md_time)  # speed test

# ---- file: pyUtilities/__init__.py | repo: gregmoille/InstrumentControl | license: MIT ----
from .createpyqtgraph import CreatePyQtGraph, ReplaceData, ShowDataTip, SetPen, PlotDownSampleTrace

# ---- file: classy_text/__init__.py | repo: mt-edwards/classy-text | license: MIT ----
from .batch_train_sequence_model import *
from .build_model import *
from .explore_data import *
from .integration_test import *
from .load_data import *
from .train_fine_tuned_sequence_model import *
from .train_ngram_model import *
from .train_sequence_model import *
from .tune_ngram_model import *
from .vectorize_data import *

# ---- file: keras/data_loaders/__init__.py | repo: Abhay242/language-identification- | license: MIT ----
from .csv_loader import CSVLoader
from .image_loader import ImageLoader
from .spectrogram2 import Spectrogram2Loader
from .DirectoryLoader import DirectoryLoader
#from rosa_loader import RosaLoader

# ---- file: tests/google/appengine/api/modules/modules_stub_test.py | repo: phil-lopreiato/appengine-python-standard | license: Apache-2.0 ----
#!/usr/bin/env python
#
# Copyright 2007 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Tests for google.appengine.api.modules.modules_stub."""
import logging

import google
import mox

from absl.testing import absltest

from google.appengine.api import apiproxy_stub_map
from google.appengine.api import request_info
from google.appengine.api.modules import modules
from google.appengine.api.modules import modules_stub


class ModulesStubTest(absltest.TestCase):

  def setUp(self):
    self.mox = mox.Mox()
    self.request_data = self.mox.CreateMock(request_info.RequestInfo)
    self.dispatcher = self.mox.CreateMock(request_info.Dispatcher)
    self.stub = modules_stub.ModulesServiceStub(self.request_data)
    apiproxy_stub_map.apiproxy = apiproxy_stub_map.GetDefaultAPIProxy()
    apiproxy_stub_map.apiproxy.RegisterStub('modules', self.stub)
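
  # Added note: the tests below use mox's record/replay/verify cycle -- the
  # expectations recorded on self.request_data and self.dispatcher before
  # each ReplayAll() call are checked by the closing VerifyAll() call.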
  def tearDown(self):
    self.mox.UnsetStubs()

  def testGetModules(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_module_names().AndReturn(['default', 'other'])
    self.mox.ReplayAll()
    self.assertEqual(['default', 'other'], modules.get_modules())
    self.mox.VerifyAll()

  def testGetVersions(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_versions('default').AndReturn(['1', '2'])
    self.mox.ReplayAll()
    self.assertEqual(['1', '2'], modules.get_versions('default'))
    self.mox.VerifyAll()

  def testGetVersions_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.get_versions('default').AndReturn(['1', '2'])
    self.mox.ReplayAll()
    self.assertEqual(['1', '2'], modules.get_versions())
    self.mox.VerifyAll()

  def testGetVersions_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_versions('default').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidModuleError,
                      modules.get_versions, 'default')
    self.mox.VerifyAll()

  def testGetDefaultVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_default_version('default').AndReturn('1')
    self.mox.ReplayAll()
    self.assertEqual('1', modules.get_default_version('default'))
    self.mox.VerifyAll()

  def testGetDefaultVersion_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.get_default_version('default').AndReturn('1')
    self.mox.ReplayAll()
    self.assertEqual('1', modules.get_default_version())
    self.mox.VerifyAll()

  def testGetDefaultVersion_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_default_version('default').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidModuleError,
                      modules.get_default_version, 'default')
    self.mox.VerifyAll()
  def testGetNumInstances(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_num_instances('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances('default', '1'))
    self.mox.VerifyAll()

  def testGetNumInstances_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.get_num_instances('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances(version='1'))
    self.mox.VerifyAll()

  def testGetNumInstances_CurrentVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.get_num_instances('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances(module='default'))
    self.mox.VerifyAll()

  def testGetNumInstances_CurrentVersionDifferentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['1'])
    self.dispatcher.get_num_instances('other', '1').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances(module='other'))
    self.mox.VerifyAll()

  def testGetNumInstances_CurrentVersionDoesNotExistInOtherModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['2'])
    self.dispatcher.get_default_version('other').AndReturn('2')
    self.dispatcher.get_num_instances('other', '2').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances(module='other'))
    self.mox.VerifyAll()

  def testGetNumInstances_CurrentModuleAndVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.get_num_instances('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    self.assertEqual(5, modules.get_num_instances())
    self.mox.VerifyAll()

  def testGetNumInstances_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('fake').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.get_num_instances, module='fake')
    self.mox.VerifyAll()

  def testGetNumInstances_VersionDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_num_instances('fake', '1').AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.get_num_instances, module='fake', version='1')
    self.mox.VerifyAll()

  def testGetNumInstances_AutoScaled(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_num_instances('default', '1').AndRaise(
        request_info.NotSupportedWithAutoScalingError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.get_num_instances, module='default', version='1')
    self.mox.VerifyAll()
  def testSetNumInstances(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.set_num_instances('default', '1', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(2, 'default', '1')
    self.mox.VerifyAll()

  def testSetNumInstances_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.set_num_instances('default', '1', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(version='1', instances=2)
    self.mox.VerifyAll()

  def testSetNumInstances_CurrentVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.set_num_instances('default', '1', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(module='default', instances=2)
    self.mox.VerifyAll()

  def testSetNumInstances_CurrentVersionDifferentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['1'])
    self.dispatcher.set_num_instances('other', '1', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(module='other', instances=2)
    self.mox.VerifyAll()

  def testSetNumInstances_CurrentVersionDoesNotExistInOtherModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['2'])
    self.dispatcher.get_default_version('other').AndReturn('2')
    self.dispatcher.set_num_instances('other', '2', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(module='other', instances=2)
    self.mox.VerifyAll()

  def testSetNumInstances_CurrentModuleAndVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.set_num_instances('default', '1', 2)
    self.mox.ReplayAll()
    modules.set_num_instances(instances=2)
    self.mox.VerifyAll()

  def testSetNumInstances_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.set_num_instances('fake', '1', 2).AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.set_num_instances, 2, 'fake', '1')
    self.mox.VerifyAll()

  def testSetNumInstances_VersionDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.set_num_instances('fake', '1', 2).AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.set_num_instances, 2, 'fake', '1')
    self.mox.VerifyAll()

  def testSetNumInstances_AutoScaled(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.set_num_instances('default', '1', 2).AndRaise(
        request_info.NotSupportedWithAutoScalingError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError, modules.set_num_instances,
                      module='default', version='1', instances=2)
    self.mox.VerifyAll()
  def testStartVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.start_version('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.start_version('default', '1')
    self.mox.VerifyAll()

  def testStartVersion_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.start_version('fake', '1').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.start_version, module='fake', version='1')
    self.mox.VerifyAll()

  def testStartVersion_VersionDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.start_version('fake', '1').AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.start_version, module='fake', version='1')
    self.mox.VerifyAll()

  def testStartVersion_AutoScaled(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.start_version('default', '1').AndRaise(
        request_info.NotSupportedWithAutoScalingError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.start_version, module='default', version='1')
    self.mox.VerifyAll()

  def testStartVersion_AlreadyStarted(self):
    """Tests that no error is raised if the version is already started."""
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.start_version('default', '1').AndRaise(
        request_info.VersionAlreadyStartedError)
    self.mox.StubOutWithMock(logging, 'info')
    logging.info('The specified module: default, version: 1 is already '
                 'started.')
    self.mox.ReplayAll()
    modules.start_version(module='default', version='1')
    self.mox.VerifyAll()
  def testStopVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.stop_version('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version('default', '1')
    self.mox.VerifyAll()

  def testStopVersion_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.stop_version('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version(version='1')
    self.mox.VerifyAll()

  def testStopVersion_CurrentVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.stop_version('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version(module='default')
    self.mox.VerifyAll()

  def testStopVersion_CurrentVersionDifferentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['1'])
    self.dispatcher.stop_version('other', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version(module='other')
    self.mox.VerifyAll()

  def testStopVersion_CurrentVersionDoesNotExistInOtherModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['2'])
    self.dispatcher.get_default_version('other').AndReturn('2')
    self.dispatcher.stop_version('other', '2').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version(module='other')
    self.mox.VerifyAll()

  def testStopVersion_CurrentModuleAndVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.stop_version('default', '1').AndReturn(5)
    self.mox.ReplayAll()
    modules.stop_version()
    self.mox.VerifyAll()

  def testStopVersion_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('fake').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.stop_version, module='fake')
    self.mox.VerifyAll()

  def testStopVersion_VersionDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.stop_version('fake', '1').AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.stop_version, module='fake', version='1')
    self.mox.VerifyAll()

  def testStopVersion_AutoScaled(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.stop_version('default', '1').AndRaise(
        request_info.NotSupportedWithAutoScalingError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidVersionError,
                      modules.stop_version, module='default', version='1')
    self.mox.VerifyAll()

  def testStopVersion_AlreadyStopped(self):
    """Tests that no error is raised if the version is already stopped."""
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.stop_version('default', '1').AndRaise(
        request_info.VersionAlreadyStoppedError)
    self.mox.StubOutWithMock(logging, 'info')
    logging.info('The specified module: default, version: 1 is already '
                 'stopped.')
    self.mox.ReplayAll()
    modules.stop_version('default', '1')
    self.mox.VerifyAll()
  def testGetHostname(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_hostname('default', '1', '0').AndReturn(
        'localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080',
                     modules.get_hostname('default', '1', '0'))
    self.mox.VerifyAll()

  def testGetHostname_LoadBalancedHostname(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_hostname('default', '1', None).AndReturn(
        'localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname('default', '1'))
    self.mox.VerifyAll()

  def testGetHostname_CurrentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.dispatcher.get_hostname('default', '1', None).AndReturn(
        'localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname(version='1'))
    self.mox.VerifyAll()

  def testGetHostname_CurrentVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.get_hostname('default', '1', None).AndReturn(
        'localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname(module='default'))
    self.mox.VerifyAll()

  def testGetHostname_CurrentVersionDifferentModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['1'])
    self.dispatcher.get_hostname('other', '1', None).AndReturn('localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname(module='other'))
    self.mox.VerifyAll()

  def testGetHostname_CurrentVersionDoesNotExistInOtherModule(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('other').AndReturn(['2'])
    self.dispatcher.get_default_version('other').AndReturn('2')
    self.dispatcher.get_hostname('other', '2', None).AndReturn('localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname(module='other'))
    self.mox.VerifyAll()

  def testGetHostname_CurrentModuleAndVersion(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_module(None).AndReturn('default')
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('default').AndReturn(['1'])
    self.dispatcher.get_hostname('default', '1',
                                 None).AndReturn('localhost:8080')
    self.mox.ReplayAll()
    self.assertEqual('localhost:8080', modules.get_hostname())
    self.mox.VerifyAll()

  def testGetHostname_ModuleDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.request_data.get_version(None).AndReturn('1')
    self.dispatcher.get_versions('fake').AndRaise(
        request_info.ModuleDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidModuleError,
                      modules.get_hostname, module='fake')
    self.mox.VerifyAll()

  def testGetHostname_VersionDoesNotExist(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_hostname('fake', '1', None).AndRaise(
        request_info.VersionDoesNotExistError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidModuleError,
                      modules.get_hostname, module='fake', version='1')
    self.mox.VerifyAll()

  def testGetHostname_InvalidInstance(self):
    self.request_data.get_dispatcher().AndReturn(self.dispatcher)
    self.dispatcher.get_hostname('default', '1', '20').AndRaise(
        request_info.InvalidInstanceIdError)
    self.mox.ReplayAll()
    self.assertRaises(modules.InvalidInstancesError, modules.get_hostname,
                      module='default', version='1', instance='20')
    self.mox.VerifyAll()


if __name__ == '__main__':
  absltest.main()

# ---- file: tests/core/authenticate/test_authenticate.py | repo: marcosflobo/cheetah-api | license: BSD-3-Clause ----
from datetime import datetime
import unittest  # needed for the unittest.main() call at the bottom of this file
from unittest import TestCase

import mock

from cheetahapi.core.authenticate import Authenticate
from cheetahapi.core.db.model import Token
from tests.factories.factory_fixtures import UserFactory
from tests.factories.factory_fixtures import TokenFactory

exp_token = TokenFactory()
exp_user = UserFactory()


class TestAuthenticate(TestCase):
"""Tests for authenticate module."""
user_id = "1"
user = "moe"
pw = "pass"
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_today_date",
return_value=datetime.strptime("2018-03-24", "%Y-%m-%d"))
def test_token_has_valid_date(self, mock_get_today_date):
"""Test to check when the token has a valid date and it does not exceed the number of days when it's valid"""
token_creation_date_string = "2018-03-23 00:00:01.377000"
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
auth.set_token_days_valid(1)
ret = auth.token_date_not_expired(token_creation_date_string)
mock_get_today_date.assert_called_once()
self.assertTrue(ret)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_today_date",
return_value=datetime.strptime("2018-03-24 00:00:00.0", "%Y-%m-%d %H:%M:%S.%f"))
def test_token_has_not_valid_date(self, mock_get_today_date):
"""Test to check when the token has not a valid date and exceeds the number of days when it's valid"""
token_creation_date_string = "2018-03-22 00:00:00.0"
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
auth.set_token_days_valid(1)
ret = auth.token_date_not_expired(token_creation_date_string)
mock_get_today_date.assert_called_once()
self.assertFalse(ret)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_today_date",
return_value=datetime.strptime("2018-03-24", "%Y-%m-%d"))
def test_is_valid_token(self, mock_get_today_date):
"""Test a token is valid"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
auth.set_token_days_valid(1)
exp_get_token_from_db = Token()
exp_get_token_from_db.created = "2018-03-23 00:00:01.377000"
with mock.patch("cheetahapi.core.authenticate.Authenticate.get_token_from_db",
return_value=exp_get_token_from_db):
is_valid = auth.is_valid_token(exp_token)
mock_get_today_date.assert_called_once()
self.assertTrue(is_valid)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_today_date",
return_value=datetime.strptime("2018-03-24 00:00:00.0", "%Y-%m-%d %H:%M:%S.%f"))
def test_is_invalid_token(self, mock_get_today_date):
"""Test a token is not valid"""
auth = Authenticate()
auth.set_token_days_valid(1)
exp_get_token_from_db = Token()
exp_get_token_from_db.created = "2018-03-22 00:00:00.0"
with mock.patch("cheetahapi.core.authenticate.Authenticate.get_token_from_db",
return_value=exp_get_token_from_db):
is_valid = auth.is_valid_token(exp_token)
mock_get_today_date.assert_called_once()
self.assertFalse(is_valid)
@mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager")
@mock.patch("cheetahapi.core.authenticate.Authenticate.create_new_token")
@mock.patch("cheetahapi.core.authenticate.Authenticate.is_valid_token",
return_value=True)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_token_user_id",
return_value=exp_token.token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_user_from_db",
return_value=exp_user)
def test_authenticate_ok(self, mock_get_user_from_db, mock_get_token_user_id,
mock_is_valid_token, mock_create_new_token, mock_load_db_manager):
"""Test to check authentication process is working"""
auth = Authenticate()
token = auth.authenticate(self.user, self.pw)
self.assertFalse(mock_create_new_token.called)
mock_load_db_manager.assert_called_once()
mock_get_user_from_db.assert_called_once_with(self.user, self.pw)
mock_get_token_user_id.assert_called_once_with(0)
mock_is_valid_token.assert_called_once_with("token-0")
self.assertEqual("token-0", token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager")
@mock.patch("cheetahapi.core.authenticate.Authenticate.create_new_token",
return_value=exp_token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.is_valid_token",
return_value=True)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_token_user_id", return_value=None)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_user_from_db",
return_value=exp_user)
def test_authenticate_ok_create_new_token_from_none_token(self, mock_get_user_from_db, mock_get_token_user_id,
mock_is_valid_token, mock_create_new_token,
mock_load_db_manager):
"""Test to check authentication process is working creating a new token because there was not previous token"""
auth = Authenticate()
token = auth.authenticate(self.user, self.pw)
mock_load_db_manager.assert_called_once()
mock_get_user_from_db.assert_called_once_with(self.user, self.pw)
mock_get_token_user_id.assert_called_once_with(exp_user.id)
mock_create_new_token.assert_called_once_with(exp_user.id)
self.assertFalse(mock_is_valid_token.called)
self.assertEqual(exp_token.token, token.token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager")
@mock.patch("cheetahapi.core.authenticate.Authenticate.create_new_token",
return_value=exp_token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.is_valid_token",
return_value=False)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_token_user_id",
return_value=exp_token.token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_user_from_db",
return_value=exp_user)
def test_authenticate_ok_create_new_token_from_invalid_token(self, mock_get_user_from_db, mock_get_token_user_id,
mock_is_valid_token, mock_create_new_token,
mock_load_db_manager):
"""Test to check authentication process is working creating a new token because previous token expired"""
auth = Authenticate()
token = auth.authenticate(self.user, self.pw)
mock_load_db_manager.assert_called_once()
mock_get_user_from_db.assert_called_once_with(self.user, self.pw)
mock_get_token_user_id.assert_called_once_with(exp_user.id)
mock_create_new_token.assert_called_once_with(exp_user.id)
mock_is_valid_token.assert_called_once_with(exp_token.token)
self.assertEqual(exp_token.token, token.token)
@mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager")
@mock.patch("cheetahapi.core.authenticate.Authenticate.get_user_from_db",
return_value=None)
def test_authenticate_error_wrong_user_or_passwd(self, mock_get_user_from_db, mock_load_db_manager):
auth = Authenticate()
try:
auth.authenticate(self.user, self.pw)
# To be sure that the exception is raised
self.assertTrue(1 == 0)
except Exception:
self.assertTrue(1 == 1)
mock_load_db_manager.assert_called_once()
mock_get_user_from_db.assert_called_once_with(self.user, self.pw)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_get_token_from_db(self, mock_db_init):
"""Test get token from database filtering by token string, which is unique"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
with mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.get_token", return_value=exp_token)\
as mock_get_token:
ret = auth.get_token_from_db(exp_token.token)
self.assertEqual(exp_token.token, ret.token)
mock_get_token.assert_called_once_with(exp_token.token)
mock_db_init.assert_called_once()
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.get_user_from_db", return_value=exp_user)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_get_user_from_db(self, mock_db_init, mock_get_user_from_db):
"""Test to get a user from the database using username and password"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
user_ret = auth.get_user_from_db(exp_user.username, exp_user.pw)
self.assertEqual(exp_user.username, user_ret.username)
mock_db_init.assert_called_once()
mock_get_user_from_db.assert_called_once_with(exp_user.username, exp_user.pw)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.get_user_from_db", return_value=None)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_get_user_from_db_not_found(self, mock_db_init, mock_get_user_from_db):
"""Test return None user when the username or password is wrong"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
username = "foo"
pw = "bar"
user_ret = auth.get_user_from_db(username, pw)
self.assertEqual(None, user_ret)
mock_db_init.assert_called_once()
mock_get_user_from_db.assert_called_once_with(username, pw)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.get_token_user_id", return_value=exp_token)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_get_token_user_id(self, mock_db_init, mock_get_token_user_id):
"""Test get token from a user id that has token"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
token_ret = auth.get_token_user_id(exp_user.id)
self.assertEqual(exp_token.token, token_ret.token)
mock_db_init.assert_called_once()
mock_get_token_user_id.assert_called_once_with(exp_user.id)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.get_token_user_id", return_value=None)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_get_token_user_id_not_found(self, mock_db_init, mock_get_token_user_id):
"""Test None token from a user id that has NOT token"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
user_id = 99999
token_ret = auth.get_token_user_id(user_id)
self.assertEqual(None, token_ret)
mock_db_init.assert_called_once()
mock_get_token_user_id.assert_called_once_with(user_id)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.create_new_token", return_value=exp_token.token)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_create_new_token(self, mock_db_init, mock_create_new_token):
"""Test get token from a user id that has token"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
token_ret = auth.create_new_token(exp_user.id)
self.assertEqual(exp_token.token, token_ret)
mock_db_init.assert_called_once()
mock_create_new_token.assert_called_once_with(exp_user.id)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.create_new_token", return_value=None)
@mock.patch("cheetahapi.core.db.db_authenticate.DbAuthenticate.__init__", return_value=None)
def test_create_new_token_not_found(self, mock_db_init, mock_create_new_token):
"""Test get token from a user id that has token"""
with mock.patch("cheetahapi.core.authenticate.Authenticate.load_db_manager"):
auth = Authenticate()
user_id = 99999
token_ret = auth.create_new_token(user_id)
self.assertEqual(None, token_ret)
mock_db_init.assert_called_once()
mock_create_new_token.assert_called_once_with(user_id)
if __name__ == '__main__':
unittest.main()

# ---- file: hexagdly.py | repo: SKellerML/MAGICML | license: MIT ----
"""
This file contains utilities to set up hexagonal convolution and pooling
kernels in PyTorch. The size of the input is abitrary, whereas the layout
from top to bottom (along tensor index 2) has to be of zig-zag-edge shape
and from left to right (along tensor index 3) of armchair-edge shape as
shown below.
      __    __                            __ __ __ __
   /11\__/31\__  . . .                   |11|21|31|41| . . .
   \__/21\__/41\                         |__|__|__|__|
   /12\__/32\__/ . . .    _______|\      |12|22|32|42| . . .
   \__/22\__/42\         |        \      |__|__|__|__|
      \__/  \__/         |_______  /
     .   .   .   .   .           |/       .   .   .   .   .
      .   .   .   .   .                  .   .   .   .   .
      .   .   .   .   .                  .   .   .   .   .
For more information visit https://github.com/ai4iacts/hexagdly
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parameter import Parameter
import numpy as np
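

# Added illustration (not part of the original module; values are
# hypothetical): the docstring's 2 x 4 example grid as an actual tensor, to
# make the zig-zag / armchair addressing concrete. Pixel "rc" of the sketch
# sits at index [0, 0, r - 1, c - 1].
def _example_layout_tensor():
    """Return a (1, 1, 2, 4) tensor laid out as in the module docstring."""
    return torch.tensor([[[[11., 21., 31., 41.],
                           [12., 22., 32., 42.]]]])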


class HexBase():
    def __init__(self):
        super(HexBase, self).__init__()
        self.hexbase_size = None
        self.depth_size = None
        self.hexbase_stride = None
        self.depth_stride = None
        self.input_size_is_known = False
        self.odd_columns_slices = []
        self.odd_columns_pads = []
        self.even_columns_slices = []
        self.even_columns_pads = []
        self.dimensions = None
        self.combine = None
        self.process = None
        self.kwargs = dict()

    def shape_for_odd_columns(self, input_size, kernel_number):
        slices = [None, None, None, None]
        pads = [0, 0, 0, 0]
        # left
        pads[0] = kernel_number
        # right
        pads[1] = max(0, kernel_number - ((input_size[-1] - 1) % (2 * self.hexbase_stride)))
        # top
        pads[2] = self.hexbase_size - int(kernel_number / 2)
        # bottom
        constraint = input_size[-2] - 1 - int(
            (input_size[-2] - 1 - int(self.hexbase_stride / 2)) / self.hexbase_stride) * self.hexbase_stride
        bottom = (self.hexbase_size - int((kernel_number + 1) / 2)) - constraint
        if bottom >= 0:
            pads[3] = bottom
        else:
            slices[1] = bottom
        return slices, pads

    def shape_for_even_columns(self, input_size, kernel_number):
        slices = [None, None, None, None]
        pads = [0, 0, 0, 0]
        # left
        left = kernel_number - self.hexbase_stride
        if left >= 0:
            pads[0] = left
        else:
            slices[2] = -left
        # right
        pads[1] = max(0, kernel_number - ((input_size[-1] - 1 - self.hexbase_stride) % (2 * self.hexbase_stride)))
        # top
        top_shift = -(kernel_number % 2) if (self.hexbase_stride % 2) == 1 else 0
        top = (self.hexbase_size - int(kernel_number / 2)) + top_shift - int(self.hexbase_stride / 2)
        if top >= 0:
            pads[2] = top
        else:
            slices[0] = -top
        # bottom
        bottom_shift = 0 if (self.hexbase_stride % 2) == 1 else -(kernel_number % 2)
        pads[3] = max(0, self.hexbase_size - int(kernel_number / 2) + bottom_shift
                      - ((input_size[-2] - int(self.hexbase_stride / 2) - 1) % self.hexbase_stride))
        return slices, pads
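
    # Added worked example (hypothetical numbers, not from the original file):
    # the pads returned above are ordered (left, right, top, bottom), matching
    # nn.ZeroPad2d. For an input of size (1, 1, 4, 4) with hexbase_size = 1,
    # hexbase_stride = 1 and kernel_number = 1, shape_for_odd_columns yields
    # pads = [1, 0, 1, 0] and no slicing.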

    def get_padded_input(self, input, pads):
        if self.dimensions == 2:
            return nn.ZeroPad2d(tuple(pads))(input)
        elif self.dimensions == 3:
            return nn.ConstantPad3d(tuple(pads + [0, 0]), 0)(input)

    def get_sliced_input(self, input, slices):
        if self.dimensions == 2:
            return input[:, :, slices[0]:slices[1], slices[2]:slices[3]]
        elif self.dimensions == 3:
            return input[:, :, :, slices[0]:slices[1], slices[2]:slices[3]]

    def get_dilation(self, dilation_2d):
        if self.dimensions == 2:
            return dilation_2d
        elif self.dimensions == 3:
            return tuple([1] + list(dilation_2d))

    def get_stride(self):
        if self.dimensions == 2:
            return (self.hexbase_stride, 2 * self.hexbase_stride)
        elif self.dimensions == 3:
            return (self.depth_stride, self.hexbase_stride, 2 * self.hexbase_stride)

    def get_ordered_output(self, input, order):
        if self.dimensions == 2:
            return input[:, :, :, order]
        elif self.dimensions == 3:
            return input[:, :, :, :, order]
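
    # Added note: get_stride() doubles the column stride (e.g. hexbase_stride
    # 2 becomes tensor strides (2, 4) in 2D) because odd and even columns are
    # convolved in two separate passes and interleaved again afterwards.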

    # general implementation of an operation with a hexagonal kernel
    def operation_with_arbitrary_stride(self, input):
        assert (input.size(-2) - (self.hexbase_stride // 2) >= 0), \
            'Too few rows to apply hex conv with the stride that is set'
        odd_columns = None
        even_columns = None

        for i in range(self.hexbase_size + 1):
            dilation_base = (1, 1) if i == 0 else (1, 2 * i)
            if not self.input_size_is_known:
                slices, pads = self.shape_for_odd_columns(input.size(), i)
                self.odd_columns_slices.append(slices)
                self.odd_columns_pads.append(pads)
                slices, pads = self.shape_for_even_columns(input.size(), i)
                self.even_columns_slices.append(slices)
                self.even_columns_pads.append(pads)
                if i == self.hexbase_size:
                    self.input_size_is_known = True

            if odd_columns is None:
                odd_columns = self.process(
                    self.get_padded_input(
                        self.get_sliced_input(input, self.odd_columns_slices[i]),
                        self.odd_columns_pads[i]),
                    getattr(self, 'kernel' + str(i)),
                    dilation=self.get_dilation(dilation_base),
                    stride=self.get_stride(),
                    **self.kwargs)
            else:
                odd_columns = self.combine(
                    odd_columns,
                    self.process(
                        self.get_padded_input(
                            self.get_sliced_input(input, self.odd_columns_slices[i]),
                            self.odd_columns_pads[i]),
                        getattr(self, 'kernel' + str(i)),
                        dilation=self.get_dilation(dilation_base),
                        stride=self.get_stride()))

            if even_columns is None:
                even_columns = self.process(
                    self.get_padded_input(
                        self.get_sliced_input(input, self.even_columns_slices[i]),
                        self.even_columns_pads[i]),
                    getattr(self, 'kernel' + str(i)),
                    dilation=self.get_dilation(dilation_base),
                    stride=self.get_stride(),
                    **self.kwargs)
            else:
                even_columns = self.combine(
                    even_columns,
                    self.process(
                        self.get_padded_input(
                            self.get_sliced_input(input, self.even_columns_slices[i]),
                            self.even_columns_pads[i]),
                        getattr(self, 'kernel' + str(i)),
                        dilation=self.get_dilation(dilation_base),
                        stride=self.get_stride()))

        concatenated_columns = torch.cat((odd_columns, even_columns), 1 + self.dimensions)
        n_odd_columns = odd_columns.size(-1)
        n_even_columns = even_columns.size(-1)
        if n_odd_columns == n_even_columns:
            order = [int(i + x * n_even_columns) for i in range(n_even_columns) for x in range(2)]
        else:
            order = [int(i + x * n_odd_columns) for i in range(n_even_columns) for x in range(2)]
            order.append(n_even_columns)
        return self.get_ordered_output(concatenated_columns, order)
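
    # Added interleaving illustration: with two odd-column and two even-column
    # outputs the computed order is [0, 2, 1, 3], i.e. the concatenation
    # [o0, o1, e0, e1] is rearranged to [o0, e0, o1, e1] so that output
    # columns reappear in their original left-to-right sequence.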

    # a slightly faster, case specific implementation of the hexagonal convolution
    def operation_with_single_hexbase_stride(self, input):
        columns_mod2 = input.size(-1) % 2
        odd_kernels_odd_columns = []
        odd_kernels_even_columns = []
        even_kernels_all_columns = []

        even_kernels_all_columns = self.process(
            self.get_padded_input(input, [0, 0, self.hexbase_size, self.hexbase_size]),
            self.kernel0,
            stride=(1, 1) if self.dimensions == 2 else (self.depth_stride, 1, 1),
            **self.kwargs)
        if self.hexbase_size >= 1:
            odd_kernels_odd_columns = self.process(
                self.get_padded_input(input, [1, columns_mod2, self.hexbase_size, self.hexbase_size - 1]),
                self.kernel1,
                dilation=self.get_dilation((1, 2)),
                stride=self.get_stride())
            odd_kernels_even_columns = self.process(
                self.get_padded_input(input, [0, 1 - columns_mod2, self.hexbase_size - 1, self.hexbase_size]),
                self.kernel1,
                dilation=self.get_dilation((1, 2)),
                stride=self.get_stride())
        if self.hexbase_size > 1:
            for i in range(2, self.hexbase_size + 1):
                if i % 2 == 0:
                    even_kernels_all_columns = self.combine(
                        even_kernels_all_columns,
                        self.process(
                            self.get_padded_input(
                                input,
                                [i, i, self.hexbase_size - int(i / 2), self.hexbase_size - int(i / 2)]),
                            getattr(self, 'kernel' + str(i)),
                            dilation=self.get_dilation((1, 2 * i)),
                            stride=(1, 1) if self.dimensions == 2 else (self.depth_stride, 1, 1)))
                else:
                    x = self.hexbase_size + int((1 - i) / 2)
                    odd_kernels_odd_columns = self.combine(
                        odd_kernels_odd_columns,
                        self.process(
                            self.get_padded_input(input, [i, i - 1 + columns_mod2, x, x - 1]),
                            getattr(self, 'kernel' + str(i)),
                            dilation=self.get_dilation((1, 2 * i)),
                            stride=self.get_stride()))
                    odd_kernels_even_columns = self.combine(
                        odd_kernels_even_columns,
                        self.process(
                            self.get_padded_input(input, [i - 1, i - columns_mod2, x - 1, x]),
                            getattr(self, 'kernel' + str(i)),
                            dilation=self.get_dilation((1, 2 * i)),
                            stride=self.get_stride()))

        odd_kernels_concatenated_columns = torch.cat(
            (odd_kernels_odd_columns, odd_kernels_even_columns), 1 + self.dimensions)
        n_odd_columns = odd_kernels_odd_columns.size(-1)
        n_even_columns = odd_kernels_even_columns.size(-1)
        if n_odd_columns == n_even_columns:
            order = [int(i + x * n_even_columns) for i in range(n_even_columns) for x in range(2)]
        else:
            order = [int(i + x * n_odd_columns) for i in range(n_even_columns) for x in range(2)]
            order.append(n_even_columns)
        return self.combine(even_kernels_all_columns,
                            self.get_ordered_output(odd_kernels_concatenated_columns, order))
class Conv2d(HexBase, nn.Module):
    r"""Applies a 2D hexagonal convolution

    Args:
        in_channels:    int: number of input channels
        out_channels:   int: number of output channels
        kernel_size:    int: number of layers with neighbouring pixels
                             covered by the convolution kernel
        stride:         int: length of strides
        bias:           bool: add bias if True (default)
        debug:          bool: switch to debug mode
                              False: weights are initialised with
                                     kaiming normal, bias with 0.01 (default)
                              True: weights / bias are set to 1.

    Examples::

        >>> conv2d = hexagdly.Conv2d(1, 3, 2, 1)
        >>> input = torch.randn(1, 1, 4, 2)
        >>> output = conv2d(input)
        >>> print(output)
    """

    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, bias=True, debug=False):
        super(Conv2d, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.hexbase_size = kernel_size
        self.hexbase_stride = stride
        self.debug = debug
        self.bias = bias
        self.dimensions = 2
        self.process = F.conv2d
        self.combine = torch.add
        # one trainable sub-kernel per column group of the hexagonal kernel
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i),
                    Parameter(torch.Tensor(out_channels, in_channels, 1 + 2 * self.hexbase_size - i, 1 if i == 0 else 2)))
        if self.bias:
            self.bias_tensor = Parameter(torch.Tensor(out_channels))
            self.kwargs = {'bias': self.bias_tensor}
        else:
            self.kwargs = {'bias': None}
        self.init_parameters(self.debug)

    def init_parameters(self, debug):
        if debug:
            for i in range(self.hexbase_size + 1):
                nn.init.constant_(getattr(self, 'kernel' + str(i)), 1)
            if self.bias:
                nn.init.constant_(self.kwargs['bias'], 1.)
        else:
            for i in range(self.hexbase_size + 1):
                nn.init.kaiming_normal_(getattr(self, 'kernel' + str(i)))
            if self.bias:
                nn.init.constant_(self.kwargs['bias'], 0.01)

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size={hexbase_size}'
             ', stride={hexbase_stride}')
        if self.bias is False:
            s += ', bias=False'
        if self.debug is True:
            s += ', debug=True'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)
class Conv2d_CustomKernel(HexBase, nn.Module):
    r"""Applies a 2D hexagonal convolution with custom kernels

    Args:
        sub_kernels:    list: list containing sub-kernels as numpy arrays
        stride:         int: length of strides
        bias:           array: numpy array with biases (default: None)
        requires_grad:  bool: trainable parameters if True (default: False)
        debug:          bool: If True a kernel of size one with all values
                              set to 1 will be applied as well as no bias
                              (default: False)

    Examples::
        Given in the online repository https://github.com/ai4iacts/hexagdly
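
        A minimal sketch (not from the original docs; the kernel shapes and
        values are illustrative, chosen to satisfy check_sub_kernels for
        kernel size 1 with 1 input and 3 output channels)::

            >>> sub_kernels = [np.ones((3, 1, 3, 1)), np.ones((3, 1, 2, 2))]
            >>> conv2d = hexagdly.Conv2d_CustomKernel(sub_kernels, stride=1)
            >>> output = conv2d(torch.randn(1, 1, 4, 2))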
"""
    def __init__(self, sub_kernels=None, stride=1, bias=None, requires_grad=False, debug=False):
        super(Conv2d_CustomKernel, self).__init__()
        # avoid a mutable default argument; None means "no sub-kernels given"
        self.sub_kernels = sub_kernels if sub_kernels is not None else []
        self.bias_array = bias
        self.hexbase_stride = stride
        self.requires_grad = requires_grad
        self.debug = debug
        self.dimensions = 2
        self.process = F.conv2d
        self.combine = torch.add
        self.init_parameters(self.debug)

    def init_parameters(self, debug):
        if debug or len(self.sub_kernels) == 0:
            print('The debug kernel is used for {name}!'.format(name=self.__class__.__name__))
            self.sub_kernels = [np.array([[[[1], [1], [1]]]]),
                                np.array([[[[1, 1], [1, 1]]]])]
        self.hexbase_size = len(self.sub_kernels) - 1
        self.check_sub_kernels()
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i),
                    Parameter(torch.from_numpy(self.sub_kernels[i]).type(torch.FloatTensor),
                              requires_grad=self.requires_grad))
        if not debug and self.bias_array is not None:
            self.check_bias()
            self.bias_tensor = Parameter(torch.from_numpy(self.bias_array).type(torch.FloatTensor),
                                         requires_grad=self.requires_grad)
            self.kwargs = {'bias': self.bias_tensor}
            self.bias = True
        else:
            self.bias = False
            if self.bias_array is not None:
                print('{name}: Bias is not used in debug mode!'.format(name=self.__class__.__name__))

    def check_sub_kernels(self):
        for i in range(self.hexbase_size + 1):
            assert type(self.sub_kernels[i]).__module__ == np.__name__, 'sub-kernels must be given as numpy arrays'
            assert len(self.sub_kernels[i].shape) == 4, 'sub-kernels must be of rank 4 for a 2d convolution'
            if i == 0:
                assert self.sub_kernels[i].shape[3] == 1, 'first sub-kernel must have only 1 column'
                assert self.sub_kernels[i].shape[2] == 2 * self.hexbase_size + 1, 'first sub-kernel must have 2 * (kernel size) + 1 rows'
                self.out_channels = self.sub_kernels[i].shape[0]
                self.in_channels = self.sub_kernels[i].shape[1]
            else:
                assert self.sub_kernels[i].shape[3] == 2, 'sub-kernel {}: all but the first sub-kernel must have 2 columns'.format(i)
                assert self.sub_kernels[i].shape[2] == 2 * self.hexbase_size + 1 - i, 'sub-kernel {} must have 2 * (kernel size) + 1 - {} rows'.format(i, i)
                assert self.sub_kernels[i].shape[0] == self.out_channels, 'sub-kernel {}: out channels are not consistent'.format(i)
                assert self.sub_kernels[i].shape[1] == self.in_channels, 'sub-kernel {}: in channels are not consistent'.format(i)

    def check_bias(self):
        assert type(self.bias_array).__module__ == np.__name__, 'bias must be given as a numpy array'
        assert len(self.bias_array.shape) == 1, 'bias must be of rank 1'
        assert self.bias_array.shape[0] == self.out_channels, 'bias must have length equal to number of out channels'

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size={hexbase_size}'
             ', stride={hexbase_stride}')
        if self.bias is False:
            s += ', bias=False'
        if self.debug is True:
            s += ', debug=True'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)
class Conv3d(HexBase, nn.Module):
    r"""Applies a 3D hexagonal convolution

    Args:
        in_channels:    int: number of input channels
        out_channels:   int: number of output channels
        kernel_size:    int, tuple: number of layers with neighbouring pixels
                             covered by the convolution kernel
                             int: same number of layers in all dimensions
                             tuple of two ints:
                                 1st int: layers in depth
                                 2nd int: layers in hexagonal base
        stride:         int, tuple: length of strides
                             int: same length of strides in each dimension
                             tuple of two ints:
                                 1st int: length of strides in depth
                                 2nd int: length of strides in hexagonal base
        bias:           bool: add bias if True (default)
        debug:          bool: switch to debug mode
                              False: weights are initialised with
                                     kaiming normal, bias with 0.01 (default)
                              True: weights / bias are set to 1.

    Examples::

        >>> conv3d = hexagdly.Conv3d(1, 3, (1, 1), (2, 2))
        >>> input = torch.randn(1, 1, 6, 5, 4)
        >>> output = conv3d(input)
        >>> print(output)
    """

    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, bias=True, debug=False):
        super(Conv3d, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        if isinstance(kernel_size, int):
            self.hexbase_size = kernel_size
            self.depth_size = kernel_size
        elif isinstance(kernel_size, tuple):
            assert len(kernel_size) == 2, 'Need a tuple of two ints to set kernel size'
            self.hexbase_size = kernel_size[1]
            self.depth_size = kernel_size[0]
        if isinstance(stride, int):
            self.hexbase_stride = stride
            self.depth_stride = stride
        elif isinstance(stride, tuple):
            assert len(stride) == 2, 'Need a tuple of two ints to set stride'
            self.hexbase_stride = stride[1]
            self.depth_stride = stride[0]
        self.debug = debug
        self.bias = bias
        self.dimensions = 3
        self.process = F.conv3d
        self.combine = torch.add
        # one trainable sub-kernel per column group, extended along the depth axis
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i),
                    Parameter(torch.Tensor(out_channels, in_channels, self.depth_size, 1 + 2 * self.hexbase_size - i, 1 if i == 0 else 2)))
        if self.bias:
            self.bias_tensor = Parameter(torch.Tensor(out_channels))
            self.kwargs = {'bias': self.bias_tensor}
        else:
            self.kwargs = {'bias': None}
        self.init_parameters(self.debug)

    def init_parameters(self, debug):
        if debug:
            for i in range(self.hexbase_size + 1):
                nn.init.constant_(getattr(self, 'kernel' + str(i)), 1)
            if self.bias:
                nn.init.constant_(self.kwargs['bias'], 1.)
        else:
            for i in range(self.hexbase_size + 1):
                nn.init.kaiming_normal_(getattr(self, 'kernel' + str(i)))
            if self.bias:
                nn.init.constant_(self.kwargs['bias'], 0.01)

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size=({depth_size}, {hexbase_size})'
             ', stride=({depth_stride}, {hexbase_stride})')
        if self.bias is False:
            s += ', bias=False'
        if self.debug is True:
            s += ', debug=True'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)
class Conv3d_CustomKernel(HexBase, nn.Module):
    r"""Applies a 3D hexagonal convolution with custom kernels

    Args:
        sub_kernels:    list: list containing sub-kernels as numpy arrays
        stride:         int, tuple: length of strides
                             int: same length of strides in each dimension
                             tuple of two ints:
                                 1st int: length of strides in depth
                                 2nd int: length of strides in hexagonal base
        bias:           array: numpy array with biases (default: None)
        requires_grad:  bool: trainable parameters if True (default: False)
        debug:          bool: If True a kernel of size one with all values
                              set to 1 will be applied as well as no bias
                              (default: False)

    Examples::
        Given in the online repository https://github.com/ai4iacts/hexagdly
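
        A minimal sketch (not from the original docs; the kernel shapes and
        values are illustrative, chosen to satisfy check_sub_kernels for
        kernel size 1, depth 2, 1 input and 3 output channels)::

            >>> sub_kernels = [np.ones((3, 1, 2, 3, 1)), np.ones((3, 1, 2, 2, 2))]
            >>> conv3d = hexagdly.Conv3d_CustomKernel(sub_kernels, stride=1)
            >>> output = conv3d(torch.randn(1, 1, 6, 5, 4))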
"""
    def __init__(self, sub_kernels=None, stride=1, bias=None, requires_grad=False, debug=False):
        super(Conv3d_CustomKernel, self).__init__()
        # avoid a mutable default argument; None means "no sub-kernels given"
        self.sub_kernels = sub_kernels if sub_kernels is not None else []
        self.bias_array = bias
        if isinstance(stride, int):
            self.hexbase_stride = stride
            self.depth_stride = stride
        elif isinstance(stride, tuple):
            assert len(stride) == 2, 'Need a tuple of two ints to set stride'
            self.hexbase_stride = stride[1]
            self.depth_stride = stride[0]
        self.requires_grad = requires_grad
        self.debug = debug
        self.dimensions = 3
        self.process = F.conv3d
        self.combine = torch.add
        self.init_parameters(self.debug)

    def init_parameters(self, debug):
        if debug or len(self.sub_kernels) == 0:
            print('The debug kernel is used for {name}!'.format(name=self.__class__.__name__))
            self.sub_kernels = [np.array([[[[[1], [1], [1]]]]]),
                                np.array([[[[[1, 1], [1, 1]]]]])]
        self.hexbase_size = len(self.sub_kernels) - 1
        self.check_sub_kernels()
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i),
                    Parameter(torch.from_numpy(self.sub_kernels[i]).type(torch.FloatTensor),
                              requires_grad=self.requires_grad))
        if not debug and self.bias_array is not None:
            self.check_bias()
            self.bias_tensor = Parameter(torch.from_numpy(self.bias_array).type(torch.FloatTensor),
                                         requires_grad=self.requires_grad)
            self.kwargs = {'bias': self.bias_tensor}
            self.bias = True
        else:
            self.bias = False
            print('No bias is used for {name}!'.format(name=self.__class__.__name__))

    def check_sub_kernels(self):
        for i in range(self.hexbase_size + 1):
            assert type(self.sub_kernels[i]).__module__ == np.__name__, 'sub-kernels must be given as numpy arrays'
            assert len(self.sub_kernels[i].shape) == 5, 'sub-kernels must be of rank 5 for a 3d convolution'
            if i == 0:
                assert self.sub_kernels[i].shape[4] == 1, 'first sub-kernel must have only 1 column'
                assert self.sub_kernels[i].shape[3] == 2 * self.hexbase_size + 1, 'first sub-kernel must have 2 * (kernel size) + 1 rows'
                self.out_channels = self.sub_kernels[i].shape[0]
                self.in_channels = self.sub_kernels[i].shape[1]
                self.depth_size = self.sub_kernels[i].shape[2]
            else:
                assert self.sub_kernels[i].shape[4] == 2, 'sub-kernel {}: all but the first sub-kernel must have 2 columns'.format(i)
                assert self.sub_kernels[i].shape[3] == 2 * self.hexbase_size + 1 - i, 'sub-kernel {} must have 2 * (kernel size) + 1 - {} rows'.format(i, i)
                assert self.sub_kernels[i].shape[0] == self.out_channels, 'sub-kernel {}: out channels are not consistent'.format(i)
                assert self.sub_kernels[i].shape[1] == self.in_channels, 'sub-kernel {}: in channels are not consistent'.format(i)
                assert self.sub_kernels[i].shape[2] == self.depth_size, 'sub-kernel {}: depths are not consistent'.format(i)

    def check_bias(self):
        assert type(self.bias_array).__module__ == np.__name__, 'bias must be given as a numpy array'
        assert len(self.bias_array.shape) == 1, 'bias must be of rank 1'
        assert self.bias_array.shape[0] == self.out_channels, 'bias must have length equal to number of out channels'

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size=({depth_size}, {hexbase_size})'
             ', stride=({depth_stride}, {hexbase_stride})')
        if self.bias is False:
            s += ', bias=False'
        if self.debug is True:
            s += ', debug=True'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)
class MaxPool2d(HexBase, nn.Module):
    r"""Applies a 2D hexagonal max pooling

    Args:
        kernel_size:    int: number of layers with neighbouring pixels
                             covered by the pooling kernel
        stride:         int: length of strides

    Examples::

        >>> maxpool2d = hexagdly.MaxPool2d(1, 2)
        >>> input = torch.randn(1, 1, 4, 2)
        >>> output = maxpool2d(input)
        >>> print(output)
    """

    def __init__(self, kernel_size=1, stride=1):
        super(MaxPool2d, self).__init__()
        self.hexbase_size = kernel_size
        self.hexbase_stride = stride
        self.dimensions = 2
        self.process = F.max_pool2d
        self.combine = torch.max
        # for pooling the "kernels" are just the window shapes handed to max_pool2d
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i), (1 + 2 * self.hexbase_size - i, 1 if i == 0 else 2))

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}(kernel_size={hexbase_size}'
             ', stride={hexbase_stride})')
        return s.format(name=self.__class__.__name__, **self.__dict__)
class MaxPool3d(HexBase, nn.Module):
    r"""Applies a 3D hexagonal max pooling

    Args:
        kernel_size:    int, tuple: number of layers with neighbouring pixels
                             covered by the pooling kernel
                             int: same number of layers in all dimensions
                             tuple of two ints:
                                 1st int: layers in depth
                                 2nd int: layers in hexagonal base
        stride:         int, tuple: length of strides
                             int: same length of strides in each dimension
                             tuple of two ints:
                                 1st int: length of strides in depth
                                 2nd int: length of strides in hexagonal base

    Examples::

        >>> maxpool3d = hexagdly.MaxPool3d((1, 1), (2, 2))
        >>> input = torch.randn(1, 1, 6, 5, 4)
        >>> output = maxpool3d(input)
        >>> print(output)
    """

    def __init__(self, kernel_size=1, stride=1):
        super(MaxPool3d, self).__init__()
        if isinstance(kernel_size, int):
            self.hexbase_size = kernel_size
            self.depth_size = kernel_size
        elif isinstance(kernel_size, tuple):
            assert len(kernel_size) == 2, 'Need a tuple of two ints to set kernel size'
            self.hexbase_size = kernel_size[1]
            self.depth_size = kernel_size[0]
        if isinstance(stride, int):
            self.hexbase_stride = stride
            self.depth_stride = stride
        elif isinstance(stride, tuple):
            assert len(stride) == 2, 'Need a tuple of two ints to set stride'
            self.hexbase_stride = stride[1]
            self.depth_stride = stride[0]
        self.dimensions = 3
        self.process = F.max_pool3d
        self.combine = torch.max
        # for pooling the "kernels" are just the window shapes handed to max_pool3d
        for i in range(self.hexbase_size + 1):
            setattr(self, 'kernel' + str(i), (self.depth_size, 1 + 2 * self.hexbase_size - i, 1 if i == 0 else 2))

    def forward(self, input):
        if self.hexbase_stride == 1:
            return self.operation_with_single_hexbase_stride(input)
        else:
            return self.operation_with_arbitrary_stride(input)

    def __repr__(self):
        s = ('{name}(kernel_size=({depth_size}, {hexbase_size})'
             ', stride=({depth_stride}, {hexbase_stride}))')
        return s.format(name=self.__class__.__name__, **self.__dict__)