hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
714965a22b37a5cb4301902949452a3e5b3fe594 | 24 | py | Python | src/hx711_multi/__init__.py | crispyman/hx711-multi | f444fcb1104b5e6e58f368dfb290c207f8ae9a50 | [
"MIT"
] | 5 | 2021-08-03T18:54:54.000Z | 2022-03-30T22:08:21.000Z | src/hx711_multi/__init__.py | crispyman/hx711-multi | f444fcb1104b5e6e58f368dfb290c207f8ae9a50 | [
"MIT"
] | 7 | 2021-06-07T04:42:08.000Z | 2022-03-15T16:29:02.000Z | src/hx711_multi/__init__.py | crispyman/hx711-multi | f444fcb1104b5e6e58f368dfb290c207f8ae9a50 | [
"MIT"
] | 4 | 2021-09-15T16:39:15.000Z | 2021-12-21T10:36:41.000Z | from .hx711 import HX711 | 24 | 24 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 0.125 | 24 | 1 | 24 | 24 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
7154dc1d0fe1288a26d9b39a4117c1bbaec0664a | 21,128 | py | Python | sdk/python/pulumi_aws/msk/cluster.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/msk/cluster.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/msk/cluster.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Cluster(pulumi.CustomResource):
arn: pulumi.Output[str]
"""
Amazon Resource Name (ARN) of the MSK Configuration to use in the cluster.
"""
bootstrap_brokers: pulumi.Output[str]
"""
A comma separated list of one or more hostname:port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `PLAINTEXT` or `TLS_PLAINTEXT`.
"""
bootstrap_brokers_tls: pulumi.Output[str]
"""
A comma separated list of one or more DNS names (or IPs) and TLS port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS`.
"""
broker_node_group_info: pulumi.Output[dict]
"""
Configuration block for the broker nodes of the Kafka cluster.
* `azDistribution` (`str`) - The distribution of broker nodes across availability zones ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-model-brokerazdistribution)). Currently the only valid value is `DEFAULT`.
* `clientSubnets` (`list`) - A list of subnets to connect to in client VPC ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-prop-brokernodegroupinfo-clientsubnets)).
* `ebsVolumeSize` (`float`) - The size in GiB of the EBS volume for the data drive on each broker node.
* `instance_type` (`str`) - Specify the instance type to use for the kafka brokers. e.g. kafka.m5.large. ([Pricing info](https://aws.amazon.com/msk/pricing/))
* `security_groups` (`list`) - A list of the security groups to associate with the elastic network interfaces to control who can communicate with the cluster.
"""
client_authentication: pulumi.Output[dict]
"""
Configuration block for specifying a client authentication. See below.
* `tls` (`dict`) - Configuration block for specifying TLS client authentication. See below.
* `certificateAuthorityArns` (`list`) - List of ACM Certificate Authority Amazon Resource Names (ARNs).
"""
cluster_name: pulumi.Output[str]
"""
Name of the MSK cluster.
"""
configuration_info: pulumi.Output[dict]
"""
Configuration block for specifying a MSK Configuration to attach to Kafka brokers. See below.
* `arn` (`str`) - Amazon Resource Name (ARN) of the MSK Configuration to use in the cluster.
* `revision` (`float`) - Revision of the MSK Configuration to use in the cluster.
"""
current_version: pulumi.Output[str]
"""
Current version of the MSK Cluster used for updates, e.g. `K13V1IB3VIYZZH`
* `encryption_info.0.encryption_at_rest_kms_key_arn` - The ARN of the KMS key used for encryption at rest of the broker data volumes.
"""
encryption_info: pulumi.Output[dict]
"""
Configuration block for specifying encryption. See below.
* `encryptionAtRestKmsKeyArn` (`str`) - You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest. If no key is specified, an AWS managed KMS ('aws/msk' managed service) key will be used for encrypting the data at rest.
* `encryptionInTransit` (`dict`) - Configuration block to specify encryption in transit. See below.
* `clientBroker` (`str`) - Encryption setting for data in transit between clients and brokers. Valid values: `TLS`, `TLS_PLAINTEXT`, and `PLAINTEXT`. Default value: `TLS_PLAINTEXT`.
* `inCluster` (`bool`) - Whether data communication among broker nodes is encrypted. Default value: `true`.
"""
enhanced_monitoring: pulumi.Output[str]
"""
Specify the desired enhanced MSK CloudWatch monitoring level. See [Monitoring Amazon MSK with Amazon CloudWatch](https://docs.aws.amazon.com/msk/latest/developerguide/monitoring.html)
"""
kafka_version: pulumi.Output[str]
"""
Specify the desired Kafka software version.
"""
number_of_broker_nodes: pulumi.Output[float]
"""
The desired total number of broker nodes in the kafka cluster. It must be a multiple of the number of specified client subnets.
"""
open_monitoring: pulumi.Output[dict]
"""
Configuration block for JMX and Node monitoring for the MSK cluster. See below.
* `prometheus` (`dict`) - Configuration block for Prometheus settings for open monitoring. See below.
* `jmxExporter` (`dict`) - Configuration block for JMX Exporter. See below.
* `enabledInBroker` (`bool`) - Indicates whether you want to enable or disable the Node Exporter.
* `nodeExporter` (`dict`) - Configuration block for Node Exporter. See below.
* `enabledInBroker` (`bool`) - Indicates whether you want to enable or disable the Node Exporter.
"""
tags: pulumi.Output[dict]
"""
A mapping of tags to assign to the resource
"""
zookeeper_connect_string: pulumi.Output[str]
"""
A comma separated list of one or more hostname:port pairs to use to connect to the Apache Zookeeper cluster.
"""
def __init__(__self__, resource_name, opts=None, broker_node_group_info=None, client_authentication=None, cluster_name=None, configuration_info=None, encryption_info=None, enhanced_monitoring=None, kafka_version=None, number_of_broker_nodes=None, open_monitoring=None, tags=None, __props__=None, __name__=None, __opts__=None):
"""
Manages an AWS Managed Streaming for Kafka cluster.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] broker_node_group_info: Configuration block for the broker nodes of the Kafka cluster.
:param pulumi.Input[dict] client_authentication: Configuration block for specifying a client authentication. See below.
:param pulumi.Input[str] cluster_name: Name of the MSK cluster.
:param pulumi.Input[dict] configuration_info: Configuration block for specifying a MSK Configuration to attach to Kafka brokers. See below.
:param pulumi.Input[dict] encryption_info: Configuration block for specifying encryption. See below.
:param pulumi.Input[str] enhanced_monitoring: Specify the desired enhanced MSK CloudWatch monitoring level. See [Monitoring Amazon MSK with Amazon CloudWatch](https://docs.aws.amazon.com/msk/latest/developerguide/monitoring.html)
:param pulumi.Input[str] kafka_version: Specify the desired Kafka software version.
:param pulumi.Input[float] number_of_broker_nodes: The desired total number of broker nodes in the kafka cluster. It must be a multiple of the number of specified client subnets.
:param pulumi.Input[dict] open_monitoring: Configuration block for JMX and Node monitoring for the MSK cluster. See below.
:param pulumi.Input[dict] tags: A mapping of tags to assign to the resource
The **broker_node_group_info** object supports the following:
* `azDistribution` (`pulumi.Input[str]`) - The distribution of broker nodes across availability zones ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-model-brokerazdistribution)). Currently the only valid value is `DEFAULT`.
* `clientSubnets` (`pulumi.Input[list]`) - A list of subnets to connect to in client VPC ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-prop-brokernodegroupinfo-clientsubnets)).
* `ebsVolumeSize` (`pulumi.Input[float]`) - The size in GiB of the EBS volume for the data drive on each broker node.
* `instance_type` (`pulumi.Input[str]`) - Specify the instance type to use for the kafka brokers. e.g. kafka.m5.large. ([Pricing info](https://aws.amazon.com/msk/pricing/))
* `security_groups` (`pulumi.Input[list]`) - A list of the security groups to associate with the elastic network interfaces to control who can communicate with the cluster.
The **client_authentication** object supports the following:
* `tls` (`pulumi.Input[dict]`) - Configuration block for specifying TLS client authentication. See below.
* `certificateAuthorityArns` (`pulumi.Input[list]`) - List of ACM Certificate Authority Amazon Resource Names (ARNs).
The **configuration_info** object supports the following:
* `arn` (`pulumi.Input[str]`) - Amazon Resource Name (ARN) of the MSK Configuration to use in the cluster.
* `revision` (`pulumi.Input[float]`) - Revision of the MSK Configuration to use in the cluster.
The **encryption_info** object supports the following:
* `encryptionAtRestKmsKeyArn` (`pulumi.Input[str]`) - You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest. If no key is specified, an AWS managed KMS ('aws/msk' managed service) key will be used for encrypting the data at rest.
* `encryptionInTransit` (`pulumi.Input[dict]`) - Configuration block to specify encryption in transit. See below.
* `clientBroker` (`pulumi.Input[str]`) - Encryption setting for data in transit between clients and brokers. Valid values: `TLS`, `TLS_PLAINTEXT`, and `PLAINTEXT`. Default value: `TLS_PLAINTEXT`.
* `inCluster` (`pulumi.Input[bool]`) - Whether data communication among broker nodes is encrypted. Default value: `true`.
The **open_monitoring** object supports the following:
* `prometheus` (`pulumi.Input[dict]`) - Configuration block for Prometheus settings for open monitoring. See below.
* `jmxExporter` (`pulumi.Input[dict]`) - Configuration block for JMX Exporter. See below.
* `enabledInBroker` (`pulumi.Input[bool]`) - Indicates whether you want to enable or disable the Node Exporter.
* `nodeExporter` (`pulumi.Input[dict]`) - Configuration block for Node Exporter. See below.
* `enabledInBroker` (`pulumi.Input[bool]`) - Indicates whether you want to enable or disable the Node Exporter.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/msk_cluster.html.markdown.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
if broker_node_group_info is None:
raise TypeError("Missing required property 'broker_node_group_info'")
__props__['broker_node_group_info'] = broker_node_group_info
__props__['client_authentication'] = client_authentication
if cluster_name is None:
raise TypeError("Missing required property 'cluster_name'")
__props__['cluster_name'] = cluster_name
__props__['configuration_info'] = configuration_info
__props__['encryption_info'] = encryption_info
__props__['enhanced_monitoring'] = enhanced_monitoring
if kafka_version is None:
raise TypeError("Missing required property 'kafka_version'")
__props__['kafka_version'] = kafka_version
if number_of_broker_nodes is None:
raise TypeError("Missing required property 'number_of_broker_nodes'")
__props__['number_of_broker_nodes'] = number_of_broker_nodes
__props__['open_monitoring'] = open_monitoring
__props__['tags'] = tags
__props__['arn'] = None
__props__['bootstrap_brokers'] = None
__props__['bootstrap_brokers_tls'] = None
__props__['current_version'] = None
__props__['zookeeper_connect_string'] = None
super(Cluster, __self__).__init__(
'aws:msk/cluster:Cluster',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, arn=None, bootstrap_brokers=None, bootstrap_brokers_tls=None, broker_node_group_info=None, client_authentication=None, cluster_name=None, configuration_info=None, current_version=None, encryption_info=None, enhanced_monitoring=None, kafka_version=None, number_of_broker_nodes=None, open_monitoring=None, tags=None, zookeeper_connect_string=None):
"""
Get an existing Cluster resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: Amazon Resource Name (ARN) of the MSK Configuration to use in the cluster.
:param pulumi.Input[str] bootstrap_brokers: A comma separated list of one or more hostname:port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `PLAINTEXT` or `TLS_PLAINTEXT`.
:param pulumi.Input[str] bootstrap_brokers_tls: A comma separated list of one or more DNS names (or IPs) and TLS port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS`.
:param pulumi.Input[dict] broker_node_group_info: Configuration block for the broker nodes of the Kafka cluster.
:param pulumi.Input[dict] client_authentication: Configuration block for specifying a client authentication. See below.
:param pulumi.Input[str] cluster_name: Name of the MSK cluster.
:param pulumi.Input[dict] configuration_info: Configuration block for specifying a MSK Configuration to attach to Kafka brokers. See below.
:param pulumi.Input[str] current_version: Current version of the MSK Cluster used for updates, e.g. `K13V1IB3VIYZZH`
* `encryption_info.0.encryption_at_rest_kms_key_arn` - The ARN of the KMS key used for encryption at rest of the broker data volumes.
:param pulumi.Input[dict] encryption_info: Configuration block for specifying encryption. See below.
:param pulumi.Input[str] enhanced_monitoring: Specify the desired enhanced MSK CloudWatch monitoring level. See [Monitoring Amazon MSK with Amazon CloudWatch](https://docs.aws.amazon.com/msk/latest/developerguide/monitoring.html)
:param pulumi.Input[str] kafka_version: Specify the desired Kafka software version.
:param pulumi.Input[float] number_of_broker_nodes: The desired total number of broker nodes in the kafka cluster. It must be a multiple of the number of specified client subnets.
:param pulumi.Input[dict] open_monitoring: Configuration block for JMX and Node monitoring for the MSK cluster. See below.
:param pulumi.Input[dict] tags: A mapping of tags to assign to the resource
:param pulumi.Input[str] zookeeper_connect_string: A comma separated list of one or more hostname:port pairs to use to connect to the Apache Zookeeper cluster.
The **broker_node_group_info** object supports the following:
* `azDistribution` (`pulumi.Input[str]`) - The distribution of broker nodes across availability zones ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-model-brokerazdistribution)). Currently the only valid value is `DEFAULT`.
* `clientSubnets` (`pulumi.Input[list]`) - A list of subnets to connect to in client VPC ([documentation](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-prop-brokernodegroupinfo-clientsubnets)).
* `ebsVolumeSize` (`pulumi.Input[float]`) - The size in GiB of the EBS volume for the data drive on each broker node.
* `instance_type` (`pulumi.Input[str]`) - Specify the instance type to use for the kafka brokers. e.g. kafka.m5.large. ([Pricing info](https://aws.amazon.com/msk/pricing/))
* `security_groups` (`pulumi.Input[list]`) - A list of the security groups to associate with the elastic network interfaces to control who can communicate with the cluster.
The **client_authentication** object supports the following:
* `tls` (`pulumi.Input[dict]`) - Configuration block for specifying TLS client authentication. See below.
* `certificateAuthorityArns` (`pulumi.Input[list]`) - List of ACM Certificate Authority Amazon Resource Names (ARNs).
The **configuration_info** object supports the following:
* `arn` (`pulumi.Input[str]`) - Amazon Resource Name (ARN) of the MSK Configuration to use in the cluster.
* `revision` (`pulumi.Input[float]`) - Revision of the MSK Configuration to use in the cluster.
The **encryption_info** object supports the following:
* `encryptionAtRestKmsKeyArn` (`pulumi.Input[str]`) - You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest. If no key is specified, an AWS managed KMS ('aws/msk' managed service) key will be used for encrypting the data at rest.
* `encryptionInTransit` (`pulumi.Input[dict]`) - Configuration block to specify encryption in transit. See below.
* `clientBroker` (`pulumi.Input[str]`) - Encryption setting for data in transit between clients and brokers. Valid values: `TLS`, `TLS_PLAINTEXT`, and `PLAINTEXT`. Default value: `TLS_PLAINTEXT`.
* `inCluster` (`pulumi.Input[bool]`) - Whether data communication among broker nodes is encrypted. Default value: `true`.
The **open_monitoring** object supports the following:
* `prometheus` (`pulumi.Input[dict]`) - Configuration block for Prometheus settings for open monitoring. See below.
* `jmxExporter` (`pulumi.Input[dict]`) - Configuration block for JMX Exporter. See below.
* `enabledInBroker` (`pulumi.Input[bool]`) - Indicates whether you want to enable or disable the Node Exporter.
* `nodeExporter` (`pulumi.Input[dict]`) - Configuration block for Node Exporter. See below.
* `enabledInBroker` (`pulumi.Input[bool]`) - Indicates whether you want to enable or disable the Node Exporter.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/msk_cluster.html.markdown.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["arn"] = arn
__props__["bootstrap_brokers"] = bootstrap_brokers
__props__["bootstrap_brokers_tls"] = bootstrap_brokers_tls
__props__["broker_node_group_info"] = broker_node_group_info
__props__["client_authentication"] = client_authentication
__props__["cluster_name"] = cluster_name
__props__["configuration_info"] = configuration_info
__props__["current_version"] = current_version
__props__["encryption_info"] = encryption_info
__props__["enhanced_monitoring"] = enhanced_monitoring
__props__["kafka_version"] = kafka_version
__props__["number_of_broker_nodes"] = number_of_broker_nodes
__props__["open_monitoring"] = open_monitoring
__props__["tags"] = tags
__props__["zookeeper_connect_string"] = zookeeper_connect_string
return Cluster(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 70.662207 | 388 | 0.702953 | 2,688 | 21,128 | 5.361979 | 0.106027 | 0.046555 | 0.039339 | 0.029487 | 0.854298 | 0.839242 | 0.82384 | 0.809269 | 0.798515 | 0.787969 | 0 | 0.001559 | 0.210432 | 21,128 | 298 | 389 | 70.899329 | 0.862427 | 0.49593 | 0 | 0.021978 | 1 | 0 | 0.178994 | 0.055471 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043956 | false | 0.010989 | 0.065934 | 0.021978 | 0.318681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
715af5b054ead30762d3667d55d322d1547277c8 | 6,848 | py | Python | ellipticals/elliptical_functions.py | kadglass/RotationCurves | 7f52dac022ca4df666e671eeb7f0aaf65c8515c3 | [
"BSD-3-Clause"
] | 2 | 2020-03-20T09:43:57.000Z | 2020-05-27T23:50:38.000Z | ellipticals/elliptical_functions.py | kadglass/RotationCurves | 7f52dac022ca4df666e671eeb7f0aaf65c8515c3 | [
"BSD-3-Clause"
] | 1 | 2021-09-14T07:16:09.000Z | 2021-09-14T07:16:09.000Z | ellipticals/elliptical_functions.py | kadglass/RotationCurves | 7f52dac022ca4df666e671eeb7f0aaf65c8515c3 | [
"BSD-3-Clause"
] | 3 | 2021-09-13T18:22:03.000Z | 2022-02-08T20:00:24.000Z | ################################################################################
# IMPORT MODULES
#-------------------------------------------------------------------------------
from parse_data import build_galaxy_IDs
from analyze_data import plot_FundamentalPlane, plot_FaberJackson, determine_masses
from IO_data import write_masses
################################################################################
################################################################################
#-------------------------------------------------------------------------------
def FundamentalPlane(galaxy_ID,
data_directory,
master_filename,
sigma_type,
save_fig=False):
'''
Plot the Fundamental Plane for the galaxies in galaxy_ID.
PARAMETERS
==========
galaxy_ID : string
Either the plate-IFU identification for a particular MaNGA galaxy to be
analyzed, or 'all'. If 'all', then all elliptical galaxies should be
analyzed.
data_directory : string
Location of the data stored on the local computer.
master_filename : string
File name of the master table. This table is a list of all the MaNGA
galaxies, along with their associated NSA data and other parameters
previously calculated by us.
sigma_type : string
Location / type of velocity dispersion. Options include:
- 'median' : returns the median value of the velocity dispersion map
- 'central' : returns the central value of the velocity dispersion map
save_fig : boolean
Determines whether or not to save the figure. Default is False (do not
save).
'''
############################################################################
# Create list of tuples of elliptical galaxy ID(s) to be analyzed
#---------------------------------------------------------------------------
elliptical_IDs = build_galaxy_IDs(galaxy_ID, master_filename)
############################################################################
############################################################################
# Plot the Fundamental Plane
#---------------------------------------------------------------------------
plot_FundamentalPlane(elliptical_IDs, data_directory, sigma_type, save_fig)
############################################################################
################################################################################
################################################################################
#-------------------------------------------------------------------------------
def FaberJackson(galaxy_ID,
data_directory,
master_filename,
sigma_type,
save_fig=False):
'''
Plot the Faber-Jackson relation for the galaxies in galaxy_ID.
PARAMETERS
==========
galaxy_ID : string
Either the plate-IFU identification for a particular MaNGA galaxy to be
analyzed, or 'all'. If 'all', then all elliptical galaxies should be
analyzed.
data_directory : string
Location of the data stored on the local computer.
master_filename : string
File name of the master table. This table is a list of all the MaNGA
galaxies, along with their associated NSA data and other parameters
previously calculated by us.
sigma_type : string
Location / type of velocity dispersion. Options include:
- 'median' : returns the median value of the velocity dispersion map
- 'central' : returns the central value of the velocity dispersion map
save_fig : boolean
Determines wether or not to save the figure. Default is False (do not
save).
'''
############################################################################
# Create list of tuples of elliptical galaxy ID(s) to be analyzed
#---------------------------------------------------------------------------
elliptical_IDs = build_galaxy_IDs(galaxy_ID, master_filename)
############################################################################
############################################################################
# Plot the Faber-Jackson relation
#---------------------------------------------------------------------------
plot_FaberJackson(elliptical_IDs, data_directory, sigma_type, save_fig)
############################################################################
################################################################################
################################################################################
#-------------------------------------------------------------------------------
def elliptical_masses(galaxy_ID, data_directory, master_filename):
    '''
    Parse through the data files and calculate the mass of elliptical galaxies.


    PARAMETERS
    ==========

    galaxy_ID : string
        Either the plate-IFU identification for a particular MaNGA galaxy to
        be analyzed, or 'all'.  If 'all', then all elliptical galaxies should
        be analyzed.

    data_directory : string
        Location of the data stored on the local computer.

    master_filename : string
        File name of the master table.  This table is a list of all the MaNGA
        galaxies, along with their associated NSA data and other parameters
        previously calculated by us.
    '''

    ############################################################################
    # Create list of tuples of elliptical galaxy ID(s) to be analyzed
    #---------------------------------------------------------------------------
    elliptical_IDs = build_galaxy_IDs(galaxy_ID, master_filename)
    ############################################################################


    ############################################################################
    # Calculate mass for each galaxy
    #---------------------------------------------------------------------------
    elliptical_masses = determine_masses(elliptical_IDs, data_directory)
    ############################################################################


    ############################################################################
    # Add masses to value-added catalog (master file)
    #---------------------------------------------------------------------------
    if galaxy_ID == 'all':
        write_masses(elliptical_masses, elliptical_IDs, master_filename)
    else:
        print(elliptical_masses)
    ############################################################################
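For illustration only: `build_galaxy_IDs` is defined elsewhere in this module, but a minimal sketch of how a MaNGA plate-IFU string can be split into an ID tuple (helper name hypothetical) looks like:

```python
def parse_plate_ifu(galaxy_ID):
    """Split a MaNGA plate-IFU string such as '7443-12705' into a
    (plate, IFU design) tuple of strings.  Illustrative helper only."""
    plate, ifu = galaxy_ID.split('-')
    return plate, ifu


print(parse_plate_ifu('7443-12705'))  # → ('7443', '12705')
```

The real helper additionally handles the `'all'` case by reading every elliptical galaxy ID out of the master table.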
################################################################################


# File: stubs.min/System/Windows/Forms/__init___parts/RadioButtonRenderer.py
# (ironpython-stubs, MIT license)

class RadioButtonRenderer(object):
""" Provides methods used to render an option button control (also known as a radio button) with or without visual styles. This class cannot be inherited. """
@staticmethod
def DrawParentBackground(g,bounds,childControl):
"""
DrawParentBackground(g: Graphics,bounds: Rectangle,childControl: Control)
Draws the background of a control's parent in the specified area.
g: The System.Drawing.Graphics used to draw the background of the parent of
childControl.
bounds: The System.Drawing.Rectangle in which to draw the parent control's background.
childControl: The control whose parent's background will be drawn.
"""
pass
    @staticmethod
    def DrawRadioButton(g, glyphLocation, *__args):
        """
        DrawRadioButton(g: Graphics, glyphLocation: Point, textBounds: Rectangle, radioButtonText: str, font: Font, image: Image, imageBounds: Rectangle, focused: bool, state: RadioButtonState)

        Draws an option button control (also known as a radio button) in the
        specified state and location, with the specified text and image, and
        with an optional focus rectangle.

        g: The System.Drawing.Graphics used to draw the option button.
        glyphLocation: The System.Drawing.Point to draw the option button glyph at.
        textBounds: The System.Drawing.Rectangle to draw radioButtonText in.
        radioButtonText: The System.String to draw with the option button.
        font: The System.Drawing.Font to apply to radioButtonText.
        image: The System.Drawing.Image to draw with the option button.
        imageBounds: The System.Drawing.Rectangle to draw image in.
        focused: true to draw a focus rectangle; otherwise, false.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.

        DrawRadioButton(g: Graphics, glyphLocation: Point, textBounds: Rectangle, radioButtonText: str, font: Font, flags: TextFormatFlags, image: Image, imageBounds: Rectangle, focused: bool, state: RadioButtonState)

        Draws an option button control (also known as a radio button) in the
        specified state and location; with the specified text, text formatting,
        and image; and with an optional focus rectangle.

        g: The System.Drawing.Graphics used to draw the option button.
        glyphLocation: The System.Drawing.Point to draw the option button glyph at.
        textBounds: The System.Drawing.Rectangle to draw radioButtonText in.
        radioButtonText: The System.String to draw with the option button.
        font: The System.Drawing.Font to apply to radioButtonText.
        flags: A bitwise combination of the System.Windows.Forms.TextFormatFlags values.
        image: The System.Drawing.Image to draw with the option button.
        imageBounds: The System.Drawing.Rectangle to draw image in.
        focused: true to draw a focus rectangle; otherwise, false.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.

        DrawRadioButton(g: Graphics, glyphLocation: Point, textBounds: Rectangle, radioButtonText: str, font: Font, flags: TextFormatFlags, focused: bool, state: RadioButtonState)

        Draws an option button control (also known as a radio button) in the
        specified state and location, with the specified text and text
        formatting, and with an optional focus rectangle.

        g: The System.Drawing.Graphics used to draw the option button.
        glyphLocation: The System.Drawing.Point to draw the option button glyph at.
        textBounds: The System.Drawing.Rectangle to draw radioButtonText in.
        radioButtonText: The System.String to draw with the option button.
        font: The System.Drawing.Font to apply to radioButtonText.
        flags: A bitwise combination of the System.Windows.Forms.TextFormatFlags values.
        focused: true to draw a focus rectangle; otherwise, false.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.

        DrawRadioButton(g: Graphics, glyphLocation: Point, state: RadioButtonState)

        Draws an option button control (also known as a radio button) in the
        specified state and location.

        g: The System.Drawing.Graphics used to draw the option button.
        glyphLocation: The System.Drawing.Point to draw the option button glyph at.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.

        DrawRadioButton(g: Graphics, glyphLocation: Point, textBounds: Rectangle, radioButtonText: str, font: Font, focused: bool, state: RadioButtonState)

        Draws an option button control (also known as a radio button) in the
        specified state and location, with the specified text, and with an
        optional focus rectangle.

        g: The System.Drawing.Graphics used to draw the option button.
        glyphLocation: The System.Drawing.Point to draw the option button glyph at.
        textBounds: The System.Drawing.Rectangle to draw radioButtonText in.
        radioButtonText: The System.String to draw with the option button.
        font: The System.Drawing.Font to apply to radioButtonText.
        focused: true to draw a focus rectangle; otherwise, false.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.
        """
        pass
    @staticmethod
    def GetGlyphSize(g, state):
        """
        GetGlyphSize(g: Graphics, state: RadioButtonState) -> Size

        Returns the size, in pixels, of the option button (also known as a
        radio button) glyph.

        g: The System.Drawing.Graphics used to draw the option button.
        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.
        Returns: A System.Drawing.Size that represents the size, in pixels,
            of the option button glyph.
        """
        pass
    @staticmethod
    def IsBackgroundPartiallyTransparent(state):
        """
        IsBackgroundPartiallyTransparent(state: RadioButtonState) -> bool

        Indicates whether the background of the option button (also known as
        a radio button) has semitransparent or alpha-blended pieces.

        state: One of the System.Windows.Forms.VisualStyles.RadioButtonState
            values that specifies the visual state of the option button.
        Returns: true if the background of the option button has
            semitransparent or alpha-blended pieces; otherwise, false.
        """
        pass

    RenderMatchingApplicationState = True
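The single `*__args` signature above stands in for several .NET overloads; a plain-Python sketch of arity-based dispatch in that style (names and return values hypothetical, used only to show the pattern):

```python
def draw_radio_button(g, glyph_location, *args):
    """Dispatch on the number of trailing arguments, mirroring how one
    Python signature can cover several documented overloads."""
    if len(args) == 1:
        # Overload: DrawRadioButton(g, glyphLocation, state)
        state, = args
        return ('glyph-only', state)
    elif len(args) == 5:
        # Overload: DrawRadioButton(g, glyphLocation, textBounds,
        #                           radioButtonText, font, focused, state)
        state = args[-1]
        return ('with-text', state)
    raise TypeError('unsupported overload: %d extra arguments' % len(args))


print(draw_radio_button(None, (0, 0), 'CheckedNormal'))  # → ('glyph-only', 'CheckedNormal')
```

The real stubbed method, of course, delegates to the underlying .NET renderer rather than returning tuples.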
# File: src/machine_learning_providers/providers/__init__.py
# (DIS-SIN/Asgard, MIT license)

from .base_provider import MLRegistry
from .base_provider import MLProvider
from .google import Google


# File: osf/migrations/0137_auto_20181012_1756.py
# (osf.io, Apache-2.0 license)

# -*- coding: utf-8 -*-
# Generated by Django 1.11.13 on 2018-10-12 17:56
from __future__ import unicode_literals

import django.contrib.postgres.fields
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('osf', '0136_merge_20181010_2242'),
    ]

    operations = [
        migrations.AlterField(
            model_name='collection',
            name='collected_type_choices',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=127), blank=True, default=list, size=None),
        ),
        migrations.AlterField(
            model_name='collection',
            name='issue_choices',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=127), blank=True, default=list, size=None),
        ),
        migrations.AlterField(
            model_name='collection',
            name='program_area_choices',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=127), blank=True, default=list, size=None),
        ),
        migrations.AlterField(
            model_name='collection',
            name='status_choices',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=127), blank=True, default=list, size=None),
        ),
        migrations.AlterField(
            model_name='collection',
            name='volume_choices',
            field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=127), blank=True, default=list, size=None),
        ),
        migrations.AlterField(
            model_name='collectionsubmission',
            name='collected_type',
            field=models.CharField(blank=True, max_length=127),
        ),
        migrations.AlterField(
            model_name='collectionsubmission',
            name='issue',
            field=models.CharField(blank=True, max_length=127),
        ),
        migrations.AlterField(
            model_name='collectionsubmission',
            name='program_area',
            field=models.CharField(blank=True, max_length=127),
        ),
        migrations.AlterField(
            model_name='collectionsubmission',
            name='status',
            field=models.CharField(blank=True, max_length=127),
        ),
        migrations.AlterField(
            model_name='collectionsubmission',
            name='volume',
            field=models.CharField(blank=True, max_length=127),
        ),
    ]
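Each `ArrayField` above passes `default=list`, a callable, so every new row gets its own fresh empty list. A plain-Python sketch of why a callable default matters, as opposed to sharing a literal `[]`:

```python
def add_choice_shared(choice, choices=[]):
    # Anti-pattern: the default list is created once and shared by all calls.
    choices.append(choice)
    return choices


def add_choice_fresh(choice, choices=None):
    # Like Django's default=list: build a fresh list on each call.
    if choices is None:
        choices = list()
    choices.append(choice)
    return choices


add_choice_shared('a')
print(add_choice_shared('b'))  # → ['a', 'b'] – the shared default leaked state
add_choice_fresh('a')
print(add_choice_fresh('b'))   # → ['b']
```

Django raises the same concern at the model level, which is why migrations like this one serialize the `list` callable rather than an empty-list instance.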
# File: napari/tests/test_viewer.py
# (imagejan/napari, BSD-3-Clause license)

import numpy as np
from napari import Viewer
def test_viewer(qtbot):
    """Test instantiating viewer."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    assert viewer.title == 'napari'
    assert view.viewer == viewer

    assert len(viewer.layers) == 0
    assert view.layers.vbox_layout.count() == 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_image(qtbot):
    """Test adding image."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = np.random.random((10, 15))
    viewer.add_image(data)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_volume(qtbot):
    """Test adding volume."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = np.random.random((10, 15, 20))
    viewer.add_image(data)
    viewer.dims.ndisplay = 3
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 3
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_pyramid(qtbot):
    """Test adding image pyramid."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    shapes = [(40, 20), (20, 10), (10, 5)]
    np.random.seed(0)
    data = [np.random.random(s) for s in shapes]
    viewer.add_image(data, is_pyramid=True)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_labels(qtbot):
    """Test adding labels image."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = np.random.randint(20, size=(10, 15))
    viewer.add_labels(data)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_points(qtbot):
    """Test adding points."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = 20 * np.random.random((10, 2))
    viewer.add_points(data)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_vectors(qtbot):
    """Test adding vectors."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = 20 * np.random.random((10, 2, 2))
    viewer.add_vectors(data)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_shapes(qtbot):
    """Test adding shapes."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    data = 20 * np.random.random((10, 4, 2))
    viewer.add_shapes(data)
    assert np.all(viewer.layers[0].data == data)

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 2
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 0

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_add_surface(qtbot):
    """Test adding 3D surface."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    vertices = np.random.random((10, 3))
    faces = np.random.randint(10, size=(6, 3))
    values = np.random.random(10)
    data = (vertices, faces, values)
    viewer.add_surface(data)
    assert np.all(
        [np.all(vd == d) for vd, d in zip(viewer.layers[0].data, data)]
    )

    assert len(viewer.layers) == 1
    assert view.layers.vbox_layout.count() == 2 * len(viewer.layers) + 2

    assert viewer.dims.ndim == 3
    assert view.dims.nsliders == viewer.dims.ndim
    assert np.sum(view.dims._displayed_sliders) == 1

    # Switch to 3D rendering mode and back to 2D rendering mode
    viewer.dims.ndisplay = 3
    assert viewer.dims.ndisplay == 3
    viewer.dims.ndisplay = 2
    assert viewer.dims.ndisplay == 2

    # Close the viewer
    viewer.window.close()
def test_screenshot(qtbot):
    """Test taking a screenshot."""
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)

    np.random.seed(0)
    # Add image
    data = np.random.random((10, 15))
    viewer.add_image(data)

    # Add labels
    data = np.random.randint(20, size=(10, 15))
    viewer.add_labels(data)

    # Add points
    data = 20 * np.random.random((10, 2))
    viewer.add_points(data)

    # Add vectors
    data = 20 * np.random.random((10, 2, 2))
    viewer.add_vectors(data)

    # Add shapes
    data = 20 * np.random.random((10, 4, 2))
    viewer.add_shapes(data)

    # Take screenshot
    screenshot = viewer.screenshot()
    assert screenshot.ndim == 3

    # Close the viewer
    viewer.window.close()
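Every test above repeats the same 3D/2D `ndisplay` round trip; a sketch of factoring it into a shared helper (helper name and the stand-in class are hypothetical, shown with a dummy dims object so it runs without Qt):

```python
class FakeDims:
    """Minimal stand-in for viewer.dims, used only to exercise the helper."""

    def __init__(self):
        self.ndisplay = 2


def check_ndisplay_roundtrip(dims):
    """Switch to 3D rendering and back, checking the mode sticks each time."""
    dims.ndisplay = 3
    assert dims.ndisplay == 3
    dims.ndisplay = 2
    assert dims.ndisplay == 2


dims = FakeDims()
check_ndisplay_roundtrip(dims)
print(dims.ndisplay)  # → 2
```

In the real suite the helper would take `viewer.dims`, and each test would shrink by four lines.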
# File: tests/test_hello.py
# (novocaine/rust-python-ext, MIT license)

#!/usr/bin/env python
# -*- coding: utf-8 -*-
def test_hello():
    import hello_rust
    hello_rust.hello()
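This test fails outright if the `hello_rust` extension has not been built; a common guard (sketch, module names illustrative) returns `None` instead of raising when a compiled extension is absent:

```python
import importlib


def load_optional(name):
    """Return the named module if it can be imported, else None."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None


print(load_optional('math') is not None)           # → True (stdlib is always present)
print(load_optional('no_such_extension') is None)  # → True
```

A test could then call `pytest.skip(...)` when the loader returns `None` rather than erroring on the import.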
# File: openrec/tf1/legacy/modules/fusions/__init__.py
# (pbaiz/openrec, Apache-2.0 license)

from openrec.tf1.legacy.modules.fusions.fusion import Fusion
from openrec.tf1.legacy.modules.fusions.concat import Concat
from openrec.tf1.legacy.modules.fusions.average import Average
| 46.25 | 62 | 0.854054 | 27 | 185 | 5.851852 | 0.37037 | 0.208861 | 0.265823 | 0.379747 | 0.64557 | 0.64557 | 0 | 0 | 0 | 0 | 0 | 0.017341 | 0.064865 | 185 | 3 | 63 | 61.666667 | 0.895954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
e0f1dfd72ef063dbef9d16a92179b08a759a0745 | 21,940 | py | Python | qregexeditor/api/forms/quick_ref_ui.py | ColinDuquesnoy/QRegexEditor | cb0e67821c773ea5d81511c32b1b30094d39f33a | [
"MIT"
] | 6 | 2016-01-24T07:35:28.000Z | 2021-03-18T12:26:10.000Z | qregexeditor/api/forms/quick_ref_ui.py | rrosajp/QRegexEditor | cb0e67821c773ea5d81511c32b1b30094d39f33a | [
"MIT"
] | 1 | 2019-02-05T17:35:18.000Z | 2019-02-05T17:35:18.000Z | qregexeditor/api/forms/quick_ref_ui.py | rrosajp/QRegexEditor | cb0e67821c773ea5d81511c32b1b30094d39f33a | [
"MIT"
] | 7 | 2017-12-26T03:36:55.000Z | 2020-12-10T07:12:48.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'C:\Dev\QRegexEditor\forms/quick_ref.ui'
#
# Created by: PyQt5 UI code generator 5.5.1
#
# WARNING! All changes made in this file will be lost!
from qregexeditor.qt import QtCore, QtGui, QtWidgets
class Ui_Form(object):
    def setupUi(self, Form):
        Form.setObjectName("Form")
        Form.resize(608, 353)
        self.gridLayout = QtWidgets.QGridLayout(Form)
        self.gridLayout.setObjectName("gridLayout")
        self.textEditQuickRef = QtWidgets.QTextEdit(Form)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.textEditQuickRef.sizePolicy().hasHeightForWidth())
        self.textEditQuickRef.setSizePolicy(sizePolicy)
        self.textEditQuickRef.setReadOnly(True)
        self.textEditQuickRef.setObjectName("textEditQuickRef")
        self.gridLayout.addWidget(self.textEditQuickRef, 0, 0, 1, 1)

        self.retranslateUi(Form)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        _translate = QtCore.QCoreApplication.translate
        Form.setWindowTitle(_translate("Form", "Form"))
        self.textEditQuickRef.setHtml(_translate("Form", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Sans Serif\'; font-size:9pt; font-weight:400; font-style:normal;\">\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:large; font-weight:600; text-decoration: underline;\">Special characters</span></p>\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<table border=\"0\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px;\" cellspacing=\"2\" cellpadding=\"0\">\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">escape special characters</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">.</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches any character</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">^</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches beginning of string</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">$</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches end of string</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">[5b-d]</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches any chars \'5\', \'b\', \'c\' or \'d\'</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">[^a-c6]</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches any char except \'a\', \'b\', \'c\' or \'6\'</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">R|S</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches either regex </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">R</span><span style=\" font-style:italic;\"> or regex </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">S</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">()</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">creates a capture group and indicates precedence</span></p></td></tr></table>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:large; font-weight:600; text-decoration: underline;\">Quantifiers</span></p>\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<table border=\"0\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px;\" cellspacing=\"2\" cellpadding=\"0\">\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">*</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">0 or more (append </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">?</span><span style=\" font-style:italic;\"> for non-greedy)</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">+</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">1 or more (append </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">?</span><span style=\" font-style:italic;\"> for non-greedy)</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">?</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">0 or 1 (append </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">?</span><span style=\" font-style:italic;\"> for non-greedy)</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">{m}</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">exactly </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">m</span><span style=\" font-style:italic;\"> occurrences</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">{m, n}</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">from </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">m</span><span style=\" font-style:italic;\"> to </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">n</span><span style=\" font-style:italic;\">. </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">m</span><span style=\" font-style:italic;\"> defaults to 0, </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">n</span><span style=\" font-style:italic;\"> to infinity</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">{m, n}?</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">from </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">m</span><span style=\" font-style:italic;\"> to </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">n</span><span style=\" font-style:italic;\">, as few as possible</span></p></td></tr></table>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:large; font-weight:600; text-decoration: underline;\">Special sequences</span></p>\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<table border=\"0\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px;\" cellspacing=\"2\" cellpadding=\"0\">\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\A</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">start of string</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\b</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches empty string at word boundary (between </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">\\w</span><span style=\" font-style:italic;\"> and </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">\\W</span><span style=\" font-style:italic;\">)</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\B</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches empty string not at word boundary</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\d</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">digit</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\D</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">non-digit</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\s</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">whitespace: </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">[ \\t\\n\\r\\f\\v]</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\S</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">non-whitespace</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\w</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">alphanumeric: </span><span style=\" font-family:\'Courier New,courier\'; font-style:italic;\">[0-9a-zA-Z_]</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\W</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">non-alphanumeric</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\Z</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">end of string</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">\\g&lt;id&gt;</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches a previously defined group</span></p></td></tr></table>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:large; font-weight:600; text-decoration: underline;\">Extension notation</span></p>\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<table border=\"0\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px;\" cellspacing=\"2\" cellpadding=\"0\">\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?iLmsux)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches empty string, sets the re.I/L/M/S/U/X flags</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?:...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">non-capturing version of regular parentheses</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?P&lt;name&gt;...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">named group; the matched text is accessible by name</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?P=name)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">matches whatever text was matched by the earlier group named \'name\'</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?#...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">a comment; ignored</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?=...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">lookahead assertion: matches without consuming</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?!...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">negative lookahead assertion</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?&lt;=...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">lookbehind assertion: matches if preceded</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?&lt;!...)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">negative lookbehind assertion</span></p></td></tr>\n"
"<tr>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-family:\'Courier New,courier\'; font-weight:600;\">(?(id)yes|no)</span></p></td>\n"
"<td>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-style:italic;\">match \'yes\' if group \'id\' matched, else \'no\'</span></p></td></tr></table></body></html>"))
| 97.946429 | 719 | 0.665634 | 3,532 | 21,940 | 4.133069 | 0.065402 | 0.106864 | 0.091725 | 0.106727 | 0.87923 | 0.877586 | 0.875668 | 0.875531 | 0.875531 | 0.875531 | 0 | 0.039517 | 0.068049 | 21,940 | 223 | 720 | 98.38565 | 0.674427 | 0.00948 | 0 | 0.551887 | 1 | 0.018868 | 0.343245 | 0.088239 | 0 | 0 | 0 | 0 | 0.018868 | 1 | 0.009434 | false | 0 | 0.004717 | 0 | 0.018868 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
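The quantifier and group syntax summarized in the help tables above is Python's `re` syntax, so a few of the entries can be exercised directly. A minimal illustration (my own examples, not part of the help text):

```python
import re

# '*' is greedy; '*?' is the non-greedy variant from the quantifier table
assert re.match(r"<.*>", "<a><b>").group() == "<a><b>"
assert re.match(r"<.*?>", "<a><b>").group() == "<a>"

# (?P<name>...) defines a named group; (?P=name) matches the same text again
m = re.match(r"(?P<quote>['\"]).*(?P=quote)", "'quoted'")
assert m is not None and m.group("quote") == "'"

# (?=...) lookahead asserts without consuming characters
assert re.findall(r"\w+(?=;)", "a; b; c") == ["a", "b"]
```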
e0f624fe2b1236c529ccda7db1c740655275c42d | 358 | py | Python | CPAC/easy_thresh/__init__.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | 125 | 2015-03-04T09:14:46.000Z | 2022-03-29T07:46:12.000Z | CPAC/easy_thresh/__init__.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | 1,018 | 2015-01-04T16:01:29.000Z | 2022-03-31T19:23:09.000Z | CPAC/easy_thresh/__init__.py | Lawreros/C-PAC | ce26ba9a38cbd401cd405150eeed23b805007724 | [
"BSD-3-Clause"
] | 117 | 2015-01-10T08:05:52.000Z | 2022-01-18T05:16:51.000Z | from .easy_thresh import (
    easy_thresh,
    copy_geom,
    get_standard_background_img,
    get_tuple,
    call_cluster,
)

__all__ = [
    'easy_thresh',
    'copy_geom',
    'get_standard_background_img',
    'get_tuple',
    'call_cluster',
]
| 29.833333 | 54 | 0.458101 | 29 | 358 | 5 | 0.482759 | 0.206897 | 0.193103 | 0.248276 | 0.841379 | 0.841379 | 0.841379 | 0.841379 | 0.841379 | 0.841379 | 0 | 0 | 0.458101 | 358 | 11 | 55 | 32.545455 | 0.747423 | 0 | 0 | 0 | 0 | 0 | 0.189944 | 0.075419 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
460d28723722322a635356f126f335e348edbb77 | 448 | py | Python | src/mykrobe/typing/__init__.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | 1 | 2020-01-10T06:43:22.000Z | 2020-01-10T06:43:22.000Z | src/mykrobe/typing/__init__.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | null | null | null | src/mykrobe/typing/__init__.py | chamilaadikaram/mykrobe | 2bcebf7b37f1c1416f397374da6ebfd02ce1aead | [
"MIT"
] | null | null | null | from mykrobe.typing.models.base import ProbeCoverage
from mykrobe.typing.models.presence import SequenceProbeCoverage
from mykrobe.typing.models.panel import Panel
from mykrobe.typing.models.variant import VariantProbeCoverage
from mykrobe.typing.typer.presence import PresenceTyper
from mykrobe.typing.typer.variant import VariantTyper
from mykrobe.typing.typer.genotyper import Genotyper
from mykrobe.typing.typer.genotyper import CoverageParser
| 49.777778 | 64 | 0.875 | 56 | 448 | 7 | 0.303571 | 0.22449 | 0.346939 | 0.234694 | 0.188776 | 0.188776 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 448 | 8 | 65 | 56 | 0.942308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1ca39302f94c4c44df85b190ff9e9bf77eaa191c | 64,495 | py | Python | tests/test__indexing.py | Defra-Data-Science-Centre-of-Excellence/bng-indexer | 9aaa0883294542b7cf56e3441afb6e2718a4a7c6 | [
"MIT"
] | null | null | null | tests/test__indexing.py | Defra-Data-Science-Centre-of-Excellence/bng-indexer | 9aaa0883294542b7cf56e3441afb6e2718a4a7c6 | [
"MIT"
] | null | null | null | tests/test__indexing.py | Defra-Data-Science-Centre-of-Excellence/bng-indexer | 9aaa0883294542b7cf56e3441afb6e2718a4a7c6 | [
"MIT"
] | null | null | null | """Tests for _indexing module."""
from re import sub
from typing import Sequence, Tuple, Union
import pytest
from shapely.geometry import (
LineString,
MultiLineString,
MultiPoint,
MultiPolygon,
Point,
Polygon,
)
from bng_indexer._indexing import (
_bng_geom_bounding_box,
_bng_geom_index,
_bng_geom_marked,
_bng_multigeom_bounding_box,
_bng_multigeom_index,
_bng_multigeom_marked,
_bng_multipoint,
_bng_point,
_coords_to_bng,
wkt_from_bng,
)
PARAMETERS_COORDS = [
(91794, 10403, 100000, "SV"),
(181069, 32592, 100000, "SW"),
(529912, 179176, 100000, "TQ"),
(327106, 1024213, 100000, "HY"),
(91794, 10403, 1000, "SV9110"),
(181069, 32592, 1000, "SW8132"),
(529912, 179176, 1000, "TQ2979"),
(327106, 1024213, 1000, "HY2724"),
(91794, 10403, 1, "SV9179410403"),
(181069, 32592, 1, "SW8106932592"),
(529912, 179176, 1, "TQ2991279176"),
(327106, 1024213, 1, "HY2710624213"),
(91794.0, 10403.0, 100000, "SV"),
(181069.0, 32592.0, 100000, "SW"),
(529912.0, 179176.0, 100000, "TQ"),
(327106.0, 1024213.0, 100000, "HY"),
(91794.0, 10403.0, 1000, "SV9110"),
(181069.0, 32592.0, 1000, "SW8132"),
(529912.0, 179176.0, 1000, "TQ2979"),
(327106.0, 1024213.0, 1000, "HY2724"),
(91794.0, 10403.0, 1, "SV9179410403"),
(181069.0, 32592.0, 1, "SW8106932592"),
(529912.0, 179176.0, 1, "TQ2991279176"),
(327106.0, 1024213.0, 1, "HY2710624213"),
]
@pytest.mark.parametrize(
"eastings,northings,resolution,expected_bng",
PARAMETERS_COORDS,
ids=[
"Scilly_100km_int",
"Cornwall_100km_int",
"London_100km_int",
"Orkney_100km_int",
"Scilly_1km_int",
"Cornwall_1km_int",
"London_1km_int",
"Orkney_1km_int",
"Scilly_1m_int",
"Cornwall_1m_int",
"London_1m_int",
"Orkney_1m_int",
"Scilly_100km_float",
"Cornwall_100km_float",
"London_100km_float",
"Orkney_100km_float",
"Scilly_1km_float",
"Cornwall_1km_float",
"London_1km_float",
"Orkney_1km_float",
"Scilly_1m_float",
"Cornwall_1m_float",
"London_1m_float",
"Orkney_1m_float",
],
)
def test__coords_to_bng(
eastings: Union[int, float],
northings: Union[int, float],
resolution: int,
expected_bng: str,
) -> None:
"""Returns expected British National Grid reference."""
assert expected_bng == _coords_to_bng(eastings, northings, resolution)
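The easting/northing-to-reference mapping exercised by the parameter table above follows the standard OSGB two-letter lettering scheme. A standalone sketch of that conversion (my own reconstruction for illustration, not `_coords_to_bng`'s actual implementation) reproduces the tabulated cases:

```python
import math

_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # the BNG alphabet skips 'I'


def coords_to_bng(easting: float, northing: float, resolution: int) -> str:
    """Map OSGB36 coordinates to a grid reference at the given resolution."""
    e100k, n100k = int(easting) // 100_000, int(northing) // 100_000
    # Indices of the 500 km (first) and 100 km (second) grid letters.
    first = (19 - n100k) - (19 - n100k) % 5 + (e100k + 10) // 5
    second = (19 - n100k) * 5 % 25 + e100k % 5
    digits = 5 - round(math.log10(resolution))  # digits per axis
    suffix = ""
    if digits:
        suffix = (
            f"{int(easting) % 100_000 // resolution:0{digits}d}"
            f"{int(northing) % 100_000 // resolution:0{digits}d}"
        )
    return _LETTERS[first] + _LETTERS[second] + suffix


# Matches the PARAMETERS_COORDS cases above:
assert coords_to_bng(91794, 10403, 100_000) == "SV"
assert coords_to_bng(529912, 179176, 1_000) == "TQ2979"
assert coords_to_bng(327106, 1024213, 1) == "HY2710624213"
```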
PARAMETERS_POINTS = [
(Point(535000, 181000), 1_000, ["TQ3581", "TQ3580", "TQ3481", "TQ3480"]),
(Point(535000, 181100), 1_000, ["TQ3581", "TQ3481"]),
(Point(535000, 181100), 100, ["TQ350811", "TQ350810", "TQ349811", "TQ349810"]),
(Point(535200, 181100), 1_000, ["TQ3581"]),
(Point(535200, 181100), 100, ["TQ352811", "TQ352810", "TQ351811", "TQ351810"]),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_POINTS,
ids=[
"point_vertex_1km",
"point_edge_1km",
"point_vertex_100m",
"point_1km",
        "point_vertex_100m_2",
],
)
def test__bng_point(
geometry: Point,
resolution: int,
expected_bng: Sequence[str],
) -> None:
"""Returns set of expected British National Grid references."""
    assert sorted(expected_bng) == sorted(_bng_point(geometry, resolution, pad=1))
PARAMETERS_MULTIPOINTS = [
(
MultiPoint(
[(91794, 10403), (181069, 32592), (529912, 179176), (327106, 1024213)]
),
100_000,
["SV", "SW", "TQ", "HY"],
),
(
MultiPoint(
[(91794, 10403), (181069, 32592), (529912, 179176), (327106, 1024213)]
),
1_000,
["SV9110", "SW8132", "TQ2979", "HY2724"],
),
(
MultiPoint(
[
(91794.0, 10403.0),
(181069.0, 32592.0),
(529912.0, 179176.0),
(327106.0, 1024213.0),
]
),
1,
["SV9179410403", "SW8106932592", "TQ2991279176", "HY2710624213"],
),
(
MultiPoint(
[(92000, 10000), (181000, 33000), (529912, 179176), (327106, 1024213)]
),
1_000,
[
"SV9209",
"HY2724",
"SV9210",
"SV9109",
"SW8132",
"SW8133",
"SW8033",
"SW8032",
"TQ2979",
"SV9110",
],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_MULTIPOINTS,
ids=[
"multipoint_100km",
"multipoint_1km",
"multipoint_1m",
"multipoint_vertex_1km",
],
)
def test__bng_multipoint(
geometry: MultiPoint,
resolution: int,
expected_bng: Sequence[str],
) -> None:
"""Returns set of expected British National Grid references."""
    assert sorted(expected_bng) == sorted(_bng_multipoint(geometry, resolution, pad=1))
PARAMETERS_POLYGONS = [
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
]
),
1000,
["TQ2880", "TQ2881", "TQ2882", "TQ2980", "TQ2981", "TQ2982"],
),
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
],
[
[
(528750, 181241),
(529251, 181755),
(529742, 181667),
(529511, 180765),
(528750, 181241),
]
],
),
1000,
["TQ2880", "TQ2881", "TQ2882", "TQ2980", "TQ2981", "TQ2982"],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
]
),
100,
[
"TQ285806",
"TQ285807",
"TQ285808",
"TQ285809",
"TQ285810",
"TQ285811",
"TQ285812",
"TQ285813",
"TQ285814",
"TQ285815",
"TQ285816",
"TQ285817",
"TQ285818",
"TQ285819",
"TQ285820",
"TQ286806",
"TQ286807",
"TQ286808",
"TQ286809",
"TQ286810",
"TQ286811",
"TQ286812",
"TQ286813",
"TQ286814",
"TQ286815",
"TQ286816",
"TQ286817",
"TQ286818",
"TQ286819",
"TQ286820",
"TQ287806",
"TQ287807",
"TQ287808",
"TQ287809",
"TQ287810",
"TQ287811",
"TQ287812",
"TQ287813",
"TQ287814",
"TQ287815",
"TQ287816",
"TQ287817",
"TQ287818",
"TQ287819",
"TQ287820",
"TQ288806",
"TQ288807",
"TQ288808",
"TQ288809",
"TQ288810",
"TQ288811",
"TQ288812",
"TQ288813",
"TQ288814",
"TQ288815",
"TQ288816",
"TQ288817",
"TQ288818",
"TQ288819",
"TQ288820",
"TQ289806",
"TQ289807",
"TQ289808",
"TQ289809",
"TQ289810",
"TQ289811",
"TQ289812",
"TQ289813",
"TQ289814",
"TQ289815",
"TQ289816",
"TQ289817",
"TQ289818",
"TQ289819",
"TQ289820",
"TQ290806",
"TQ290807",
"TQ290808",
"TQ290809",
"TQ290810",
"TQ290811",
"TQ290812",
"TQ290813",
"TQ290814",
"TQ290815",
"TQ290816",
"TQ290817",
"TQ290818",
"TQ290819",
"TQ290820",
"TQ291806",
"TQ291807",
"TQ291808",
"TQ291809",
"TQ291810",
"TQ291811",
"TQ291812",
"TQ291813",
"TQ291814",
"TQ291815",
"TQ291816",
"TQ291817",
"TQ291818",
"TQ291819",
"TQ291820",
"TQ292806",
"TQ292807",
"TQ292808",
"TQ292809",
"TQ292810",
"TQ292811",
"TQ292812",
"TQ292813",
"TQ292814",
"TQ292815",
"TQ292816",
"TQ292817",
"TQ292818",
"TQ292819",
"TQ292820",
"TQ293806",
"TQ293807",
"TQ293808",
"TQ293809",
"TQ293810",
"TQ293811",
"TQ293812",
"TQ293813",
"TQ293814",
"TQ293815",
"TQ293816",
"TQ293817",
"TQ293818",
"TQ293819",
"TQ293820",
"TQ294806",
"TQ294807",
"TQ294808",
"TQ294809",
"TQ294810",
"TQ294811",
"TQ294812",
"TQ294813",
"TQ294814",
"TQ294815",
"TQ294816",
"TQ294817",
"TQ294818",
"TQ294819",
"TQ294820",
"TQ295806",
"TQ295807",
"TQ295808",
"TQ295809",
"TQ295810",
"TQ295811",
"TQ295812",
"TQ295813",
"TQ295814",
"TQ295815",
"TQ295816",
"TQ295817",
"TQ295818",
"TQ295819",
"TQ295820",
"TQ296806",
"TQ296807",
"TQ296808",
"TQ296809",
"TQ296810",
"TQ296811",
"TQ296812",
"TQ296813",
"TQ296814",
"TQ296815",
"TQ296816",
"TQ296817",
"TQ296818",
"TQ296819",
"TQ296820",
"TQ297806",
"TQ297807",
"TQ297808",
"TQ297809",
"TQ297810",
"TQ297811",
"TQ297812",
"TQ297813",
"TQ297814",
"TQ297815",
"TQ297816",
"TQ297817",
"TQ297818",
"TQ297819",
"TQ297820",
"TQ298806",
"TQ298807",
"TQ298808",
"TQ298809",
"TQ298810",
"TQ298811",
"TQ298812",
"TQ298813",
"TQ298814",
"TQ298815",
"TQ298816",
"TQ298817",
"TQ298818",
"TQ298819",
"TQ298820",
],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
],
[
[
(528750.0, 181241.0),
(529251.0, 181755.0),
(529742.0, 181667.0),
(529511.0, 180765.0),
(528750.0, 181241.0),
]
],
),
100,
[
"TQ285806",
"TQ285807",
"TQ285808",
"TQ285809",
"TQ285810",
"TQ285811",
"TQ285812",
"TQ285813",
"TQ285814",
"TQ285815",
"TQ285816",
"TQ285817",
"TQ285818",
"TQ285819",
"TQ285820",
"TQ286806",
"TQ286807",
"TQ286808",
"TQ286809",
"TQ286810",
"TQ286811",
"TQ286812",
"TQ286813",
"TQ286814",
"TQ286815",
"TQ286816",
"TQ286817",
"TQ286818",
"TQ286819",
"TQ286820",
"TQ287806",
"TQ287807",
"TQ287808",
"TQ287809",
"TQ287810",
"TQ287811",
"TQ287812",
"TQ287813",
"TQ287814",
"TQ287815",
"TQ287816",
"TQ287817",
"TQ287818",
"TQ287819",
"TQ287820",
"TQ288806",
"TQ288807",
"TQ288808",
"TQ288809",
"TQ288810",
"TQ288811",
"TQ288812",
"TQ288813",
"TQ288814",
"TQ288815",
"TQ288816",
"TQ288817",
"TQ288818",
"TQ288819",
"TQ288820",
"TQ289806",
"TQ289807",
"TQ289808",
"TQ289809",
"TQ289810",
"TQ289811",
"TQ289812",
"TQ289813",
"TQ289814",
"TQ289815",
"TQ289816",
"TQ289817",
"TQ289818",
"TQ289819",
"TQ289820",
"TQ290806",
"TQ290807",
"TQ290808",
"TQ290809",
"TQ290810",
"TQ290811",
"TQ290812",
"TQ290813",
"TQ290814",
"TQ290815",
"TQ290816",
"TQ290817",
"TQ290818",
"TQ290819",
"TQ290820",
"TQ291806",
"TQ291807",
"TQ291808",
"TQ291809",
"TQ291810",
"TQ291811",
"TQ291812",
"TQ291813",
"TQ291814",
"TQ291815",
"TQ291816",
"TQ291817",
"TQ291818",
"TQ291819",
"TQ291820",
"TQ292806",
"TQ292807",
"TQ292808",
"TQ292809",
"TQ292810",
"TQ292811",
"TQ292812",
"TQ292813",
"TQ292814",
"TQ292815",
"TQ292816",
"TQ292817",
"TQ292818",
"TQ292819",
"TQ292820",
"TQ293806",
"TQ293807",
"TQ293808",
"TQ293809",
"TQ293810",
"TQ293811",
"TQ293812",
"TQ293813",
"TQ293814",
"TQ293815",
"TQ293816",
"TQ293817",
"TQ293818",
"TQ293819",
"TQ293820",
"TQ294806",
"TQ294807",
"TQ294808",
"TQ294809",
"TQ294810",
"TQ294811",
"TQ294812",
"TQ294813",
"TQ294814",
"TQ294815",
"TQ294816",
"TQ294817",
"TQ294818",
"TQ294819",
"TQ294820",
"TQ295806",
"TQ295807",
"TQ295808",
"TQ295809",
"TQ295810",
"TQ295811",
"TQ295812",
"TQ295813",
"TQ295814",
"TQ295815",
"TQ295816",
"TQ295817",
"TQ295818",
"TQ295819",
"TQ295820",
"TQ296806",
"TQ296807",
"TQ296808",
"TQ296809",
"TQ296810",
"TQ296811",
"TQ296812",
"TQ296813",
"TQ296814",
"TQ296815",
"TQ296816",
"TQ296817",
"TQ296818",
"TQ296819",
"TQ296820",
"TQ297806",
"TQ297807",
"TQ297808",
"TQ297809",
"TQ297810",
"TQ297811",
"TQ297812",
"TQ297813",
"TQ297814",
"TQ297815",
"TQ297816",
"TQ297817",
"TQ297818",
"TQ297819",
"TQ297820",
"TQ298806",
"TQ298807",
"TQ298808",
"TQ298809",
"TQ298810",
"TQ298811",
"TQ298812",
"TQ298813",
"TQ298814",
"TQ298815",
"TQ298816",
"TQ298817",
"TQ298818",
"TQ298819",
"TQ298820",
],
),
]
PARAMETERS_LINESTRINGS = [
(
LineString(
[(529000, 185000), (530500, 185200), (531111, 184990), (532000, 185000)]
),
1_000,
[
"TQ2884",
"TQ2885",
"TQ2984",
"TQ2985",
"TQ3084",
"TQ3085",
"TQ3184",
"TQ3185",
"TQ3284",
"TQ3285",
],
),
(
LineString([(532000, 185000), (532099, 185099)]),
100,
["TQ319849", "TQ319850", "TQ320849", "TQ320850"],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_POLYGONS + PARAMETERS_LINESTRINGS,
ids=[
"polygon_1km",
"polygon_hole_1km",
"polygon_100m",
"polygon_hole_100m",
"linestring_1km",
"linestring_100m",
],
)
def test__bng_geom_bounding_box(
geometry: Union[LineString, Polygon], resolution: int, expected_bng: Sequence[str]
) -> None:
"""Returns set of expected British National Grid references."""
    assert sorted(expected_bng) == sorted(
        _bng_geom_bounding_box(geometry, resolution, pad=1)
    )
PARAMETERS_MULTIPOLYGONS = [
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(531800, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
"TQ2782",
"TQ2883",
"TQ2884",
"TQ2880",
"TQ3080",
"TQ2780",
"TQ3184",
"TQ2982",
"TQ3081",
"TQ3182",
"TQ2981",
"TQ3085",
"TQ2779",
"TQ2979",
"TQ2885",
"TQ3180",
"TQ3183",
"TQ3079",
"TQ2985",
"TQ3084",
"TQ3179",
"TQ2784",
"TQ3083",
"TQ3082",
"TQ3185",
"TQ2783",
"TQ2980",
"TQ2785",
"TQ2781",
"TQ2881",
"TQ2882",
"TQ2879",
"TQ2984",
"TQ3181",
"TQ2983",
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(532200, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
"TQ2782",
"TQ2883",
"TQ2884",
"TQ2880",
"TQ3282",
"TQ3080",
"TQ2780",
"TQ3184",
"TQ2982",
"TQ3081",
"TQ3182",
"TQ3279",
"TQ2981",
"TQ3085",
"TQ2779",
"TQ2979",
"TQ2885",
"TQ3281",
"TQ3180",
"TQ3183",
"TQ3079",
"TQ2985",
"TQ3084",
"TQ3179",
"TQ2784",
"TQ3083",
"TQ3082",
"TQ3185",
"TQ2783",
"TQ2980",
"TQ2785",
"TQ2781",
"TQ2881",
"TQ2882",
"TQ2879",
"TQ2984",
"TQ3181",
"TQ2983",
"TQ3280",
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(533600, 181000),
(534000, 182000),
(535200, 181500),
(534200, 180000),
(533600, 181000),
]
),
]
),
1_000,
[
"TQ2782",
"TQ2883",
"TQ2884",
"TQ2880",
"TQ3580",
"TQ3080",
"TQ2780",
"TQ3184",
"TQ3382",
"TQ2982",
"TQ3479",
"TQ3480",
"TQ3081",
"TQ3182",
"TQ3482",
"TQ3579",
"TQ2981",
"TQ3380",
"TQ3085",
"TQ3481",
"TQ3381",
"TQ2779",
"TQ2979",
"TQ2885",
"TQ3180",
"TQ3183",
"TQ3582",
"TQ3079",
"TQ2985",
"TQ3084",
"TQ3179",
"TQ2784",
"TQ3083",
"TQ3082",
"TQ3185",
"TQ2783",
"TQ2980",
"TQ2785",
"TQ2781",
"TQ2881",
"TQ2882",
"TQ2879",
"TQ3581",
"TQ2984",
"TQ3181",
"TQ3379",
"TQ2983",
],
),
]
PARAMETERS_MULTILINESTRINGS = [
(
MultiLineString(
[
LineString(
[
(529000, 185000),
(530500, 185200),
(531111, 184990),
(531990, 184950),
]
),
LineString([(531990, 184950), (532099, 185099)]),
]
),
1_000,
[
"TQ2884",
"TQ2885",
"TQ2984",
"TQ2985",
"TQ3084",
"TQ3085",
"TQ3184",
"TQ3185",
"TQ3284",
"TQ3285",
],
),
(
MultiLineString(
[
LineString(
[
(529000, 185000),
(530500, 185200),
(531111, 184990),
(531990, 184950),
]
),
LineString([(530990, 181950), (531099, 182099)]),
]
),
1_000,
[
"TQ3082",
"TQ2885",
"TQ2884",
"TQ3182",
"TQ3084",
"TQ2985",
"TQ2984",
"TQ3184",
"TQ3085",
"TQ3181",
"TQ3081",
"TQ3185",
],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_MULTIPOLYGONS + PARAMETERS_MULTILINESTRINGS,
ids=[
"multipoly_simple_1km",
"multipoly_merge_1km",
"multipoly_separate_1km",
"multiline_complex_1km",
"multiline_simple_1km",
],
)
def test__bng_multigeom_bounding_box(
geometry: Union[MultiLineString, MultiPolygon],
resolution: int,
expected_bng: Sequence[str],
) -> None:
"""Returns set of expected British National Grid references."""
    assert sorted(expected_bng) == sorted(
        _bng_multigeom_bounding_box(geometry, resolution, pad=1)
    )
PARAMETERS_INDEX_POLYGONS = [
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
]
),
1000,
["TQ2880", "TQ2881", "TQ2882", "TQ2980", "TQ2981"],
),
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
],
[
[
(528750, 181241),
(529251, 181755),
(529742, 181667),
(529511, 180765),
(528750, 181241),
]
],
),
1000,
["TQ2880", "TQ2881", "TQ2882", "TQ2980", "TQ2981"],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
]
),
100,
[
"TQ285809",
"TQ285810",
"TQ285811",
"TQ285812",
"TQ285813",
"TQ286808",
"TQ286809",
"TQ286810",
"TQ286811",
"TQ286812",
"TQ286813",
"TQ286814",
"TQ286815",
"TQ286816",
"TQ286817",
"TQ287808",
"TQ287809",
"TQ287810",
"TQ287811",
"TQ287812",
"TQ287813",
"TQ287814",
"TQ287815",
"TQ287816",
"TQ287817",
"TQ287818",
"TQ287819",
"TQ287820",
"TQ288807",
"TQ288808",
"TQ288809",
"TQ288810",
"TQ288811",
"TQ288812",
"TQ288813",
"TQ288814",
"TQ288815",
"TQ288816",
"TQ288817",
"TQ288818",
"TQ288819",
"TQ288820",
"TQ289807",
"TQ289808",
"TQ289809",
"TQ289810",
"TQ289811",
"TQ289812",
"TQ289813",
"TQ289814",
"TQ289815",
"TQ289816",
"TQ289817",
"TQ289818",
"TQ289819",
"TQ289820",
"TQ290807",
"TQ290808",
"TQ290809",
"TQ290810",
"TQ290811",
"TQ290812",
"TQ290813",
"TQ290814",
"TQ290815",
"TQ290816",
"TQ290817",
"TQ290818",
"TQ290819",
"TQ291807",
"TQ291808",
"TQ291809",
"TQ291810",
"TQ291811",
"TQ291812",
"TQ291813",
"TQ291814",
"TQ291815",
"TQ291816",
"TQ291817",
"TQ291818",
"TQ291819",
"TQ292806",
"TQ292807",
"TQ292808",
"TQ292809",
"TQ292810",
"TQ292811",
"TQ292812",
"TQ292813",
"TQ292814",
"TQ292815",
"TQ292816",
"TQ292817",
"TQ292818",
"TQ292819",
"TQ293806",
"TQ293807",
"TQ293808",
"TQ293809",
"TQ293810",
"TQ293811",
"TQ293812",
"TQ293813",
"TQ293814",
"TQ293815",
"TQ293816",
"TQ293817",
"TQ293818",
"TQ293819",
"TQ294806",
"TQ294807",
"TQ294808",
"TQ294809",
"TQ294810",
"TQ294811",
"TQ294812",
"TQ294813",
"TQ294814",
"TQ294815",
"TQ294816",
"TQ294817",
"TQ294818",
"TQ294819",
"TQ295806",
"TQ295807",
"TQ295808",
"TQ295809",
"TQ295810",
"TQ295811",
"TQ295812",
"TQ295813",
"TQ295814",
"TQ295815",
"TQ295816",
"TQ295817",
"TQ295818",
"TQ295819",
"TQ296806",
"TQ296807",
"TQ296808",
"TQ296809",
"TQ296810",
"TQ296811",
"TQ296812",
"TQ296813",
"TQ296814",
"TQ296815",
"TQ296816",
"TQ296817",
"TQ296818",
"TQ296819",
"TQ297806",
"TQ297807",
"TQ297808",
"TQ297809",
"TQ297810",
"TQ297811",
"TQ297812",
"TQ297813",
"TQ297814",
"TQ297815",
"TQ297816",
"TQ297817",
"TQ297818",
"TQ297819",
"TQ298805",
"TQ298806",
"TQ298807",
"TQ298808",
"TQ298809",
"TQ298810",
"TQ298811",
"TQ298812",
"TQ298813",
"TQ298814",
"TQ298815",
"TQ298816",
"TQ298817",
"TQ298818",
"TQ298819",
],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
],
[
[
(528750.0, 181241.0),
(529251.0, 181755.0),
(529742.0, 181667.0),
(529511.0, 180765.0),
(528750.0, 181241.0),
]
],
),
100,
[
"TQ285809",
"TQ285810",
"TQ285811",
"TQ285812",
"TQ285813",
"TQ286808",
"TQ286809",
"TQ286810",
"TQ286811",
"TQ286812",
"TQ286813",
"TQ286814",
"TQ286815",
"TQ286816",
"TQ286817",
"TQ287808",
"TQ287809",
"TQ287810",
"TQ287811",
"TQ287812",
"TQ287813",
"TQ287814",
"TQ287815",
"TQ287816",
"TQ287817",
"TQ287818",
"TQ287819",
"TQ287820",
"TQ288807",
"TQ288808",
"TQ288809",
"TQ288810",
"TQ288811",
"TQ288812",
"TQ288813",
"TQ288814",
"TQ288815",
"TQ288816",
"TQ288817",
"TQ288818",
"TQ288819",
"TQ288820",
"TQ289807",
"TQ289808",
"TQ289809",
"TQ289810",
"TQ289811",
"TQ289813",
"TQ289814",
"TQ289815",
"TQ289816",
"TQ289817",
"TQ289818",
"TQ289819",
"TQ289820",
"TQ290807",
"TQ290808",
"TQ290809",
"TQ290810",
"TQ290814",
"TQ290815",
"TQ290816",
"TQ290817",
"TQ290818",
"TQ290819",
"TQ291807",
"TQ291808",
"TQ291809",
"TQ291810",
"TQ291816",
"TQ291817",
"TQ291818",
"TQ291819",
"TQ292806",
"TQ292807",
"TQ292808",
"TQ292809",
"TQ292817",
"TQ292818",
"TQ292819",
"TQ293806",
"TQ293807",
"TQ293808",
"TQ293817",
"TQ293818",
"TQ293819",
"TQ294806",
"TQ294807",
"TQ294808",
"TQ294817",
"TQ294818",
"TQ294819",
"TQ295806",
"TQ295807",
"TQ295808",
"TQ295809",
"TQ295810",
"TQ295811",
"TQ295816",
"TQ295817",
"TQ295818",
"TQ295819",
"TQ296806",
"TQ296807",
"TQ296808",
"TQ296809",
"TQ296810",
"TQ296811",
"TQ296812",
"TQ296813",
"TQ296814",
"TQ296815",
"TQ296816",
"TQ296817",
"TQ296818",
"TQ296819",
"TQ297806",
"TQ297807",
"TQ297808",
"TQ297809",
"TQ297810",
"TQ297811",
"TQ297812",
"TQ297813",
"TQ297814",
"TQ297815",
"TQ297816",
"TQ297817",
"TQ297818",
"TQ297819",
"TQ298805",
"TQ298806",
"TQ298807",
"TQ298808",
"TQ298809",
"TQ298810",
"TQ298811",
"TQ298812",
"TQ298813",
"TQ298814",
"TQ298815",
"TQ298816",
"TQ298817",
"TQ298818",
"TQ298819",
],
),
]
PARAMETERS_INDEX_LINESTRINGS = [
(
LineString(
[(529000, 185000), (530500, 185200), (531111, 184990), (532000, 185000)]
),
1_000,
[
"TQ2884",
"TQ2885",
"TQ2984",
"TQ2985",
"TQ3085",
"TQ3184",
"TQ3185",
"TQ3284",
"TQ3285",
],
),
(
LineString([(532000, 185000), (532099, 185099)]),
100,
["TQ319849", "TQ319850", "TQ320849", "TQ320850"],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_INDEX_POLYGONS + PARAMETERS_INDEX_LINESTRINGS,
ids=[
"polygon_index_1km",
"polygon_index_hole_1km",
"polygon_index_100m",
"polygon_index_hole_100m",
"linestring_index_1km",
"linestring_index_100m",
],
)
def test__bng_geom_index(
geometry: Union[LineString, Polygon], resolution: int, expected_bng: Sequence[str]
) -> None:
"""Returns set of expected British National Grid references."""
assert expected_bng.sort() == _bng_geom_index(geometry, resolution, pad=1).sort()
PARAMETERS_INDEX_MULTIPOLYGONS = [
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(531800, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
"TQ2981",
"TQ3083",
"TQ2884",
"TQ2985",
"TQ3179",
"TQ2880",
"TQ2984",
"TQ3180",
"TQ2983",
"TQ3185",
"TQ2982",
"TQ2882",
"TQ2782",
"TQ2881",
"TQ3181",
"TQ2885",
"TQ2883",
"TQ3183",
"TQ3085",
"TQ3084",
"TQ2781",
"TQ3081",
"TQ3182",
"TQ2780",
"TQ3082",
"TQ2980",
"TQ3080",
"TQ2979",
"TQ3184",
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(532200, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
"TQ3281",
"TQ2981",
"TQ3083",
"TQ2884",
"TQ2985",
"TQ3179",
"TQ2880",
"TQ2984",
"TQ3180",
"TQ2983",
"TQ3185",
"TQ2982",
"TQ2882",
"TQ2782",
"TQ2881",
"TQ3181",
"TQ2885",
"TQ2883",
"TQ3183",
"TQ3085",
"TQ3084",
"TQ2781",
"TQ3081",
"TQ3182",
"TQ2780",
"TQ3082",
"TQ2980",
"TQ3080",
"TQ2979",
"TQ3184",
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(533600, 181000),
(534000, 182000),
(535200, 181500),
(534200, 180000),
(533600, 181000),
]
),
]
),
1_000,
[
"TQ2981",
"TQ3083",
"TQ3380",
"TQ2884",
"TQ2985",
"TQ3382",
"TQ2880",
"TQ2984",
"TQ2983",
"TQ3381",
"TQ3185",
"TQ2982",
"TQ3581",
"TQ2882",
"TQ2782",
"TQ3480",
"TQ2881",
"TQ2885",
"TQ3479",
"TQ2883",
"TQ3482",
"TQ3183",
"TQ3085",
"TQ3084",
"TQ2781",
"TQ3081",
"TQ3182",
"TQ2780",
"TQ3082",
"TQ2980",
"TQ3080",
"TQ3481",
"TQ2979",
"TQ3184",
],
),
]
PARAMETERS_INDEX_MULTILINESTRINGS = [
(
MultiLineString(
[
LineString(
[
(529000, 185000),
(530500, 185200),
(531111, 184990),
(531990, 184950),
]
),
LineString([(531990, 184950), (532099, 185099)]),
]
),
1_000,
[
"TQ3085",
"TQ3285",
"TQ3185",
"TQ2884",
"TQ3284",
"TQ2985",
"TQ2885",
"TQ2984",
"TQ3184",
],
),
(
MultiLineString(
[
LineString(
[
(529000, 185000),
(530500, 185200),
(531111, 184990),
(531990, 184950),
]
),
LineString([(530990, 181950), (531099, 182099)]),
]
),
1_000,
[
"TQ3085",
"TQ3081",
"TQ3182",
"TQ3185",
"TQ2884",
"TQ3181",
"TQ2985",
"TQ2885",
"TQ2984",
"TQ3184",
],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_INDEX_MULTIPOLYGONS + PARAMETERS_INDEX_MULTILINESTRINGS,
ids=[
"multipoly_index_simple_1km",
"multipoly_index_merge_1km",
"multipoly_index_separate_1km",
"multiline_index_complex_1km",
"multiline_index_simple_1km",
],
)
def test__bng_multigeom_index(
geometry: Union[MultiLineString, MultiPolygon],
resolution: int,
expected_bng: Sequence[str],
) -> None:
"""Returns set of expected British National Grid references."""
assert (
expected_bng.sort() == _bng_multigeom_index(geometry, resolution, pad=1).sort()
)
PARAMETERS_MARKED_POLYGONS = [
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
]
),
1000,
[
("TQ2880", False),
("TQ2881", False),
("TQ2882", False),
("TQ2980", False),
("TQ2981", False),
],
),
(
Polygon(
[
(528111, 180999),
(528777, 182005),
(529816, 181977),
(529990, 180040),
(528878, 180755),
(528111, 180999),
],
[
[
(528750, 181241),
(529251, 181755),
(529742, 181667),
(529511, 180765),
(528750, 181241),
]
],
),
1000,
[
("TQ2880", False),
("TQ2881", False),
("TQ2882", False),
("TQ2980", False),
("TQ2981", False),
],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
]
),
100,
[
("TQ285809", False),
("TQ285810", False),
("TQ285811", False),
("TQ285812", False),
("TQ285813", False),
("TQ286808", False),
("TQ286809", False),
("TQ286810", True),
("TQ286811", True),
("TQ286812", True),
("TQ286813", False),
("TQ286814", False),
("TQ286815", False),
("TQ286816", False),
("TQ286817", False),
("TQ287808", False),
("TQ287809", True),
("TQ287810", True),
("TQ287811", True),
("TQ287812", True),
("TQ287813", True),
("TQ287814", True),
("TQ287815", True),
("TQ287816", True),
("TQ287817", False),
("TQ287818", False),
("TQ287819", False),
("TQ287820", False),
("TQ288807", False),
("TQ288808", False),
("TQ288809", True),
("TQ288810", True),
("TQ288811", True),
("TQ288812", True),
("TQ288813", True),
("TQ288814", True),
("TQ288815", True),
("TQ288816", True),
("TQ288817", True),
("TQ288818", True),
("TQ288819", True),
("TQ288820", False),
("TQ289807", False),
("TQ289808", True),
("TQ289809", True),
("TQ289810", True),
("TQ289811", True),
("TQ289812", True),
("TQ289813", True),
("TQ289814", True),
("TQ289815", True),
("TQ289816", True),
("TQ289817", True),
("TQ289818", True),
("TQ289819", False),
("TQ289820", False),
("TQ290807", False),
("TQ290808", True),
("TQ290809", True),
("TQ290810", True),
("TQ290811", True),
("TQ290812", True),
("TQ290813", True),
("TQ290814", True),
("TQ290815", True),
("TQ290816", True),
("TQ290817", True),
("TQ290818", True),
("TQ290819", False),
("TQ291807", False),
("TQ291808", True),
("TQ291809", True),
("TQ291810", True),
("TQ291811", True),
("TQ291812", True),
("TQ291813", True),
("TQ291814", True),
("TQ291815", True),
("TQ291816", True),
("TQ291817", True),
("TQ291818", True),
("TQ291819", False),
("TQ292806", False),
("TQ292807", False),
("TQ292808", True),
("TQ292809", True),
("TQ292810", True),
("TQ292811", True),
("TQ292812", True),
("TQ292813", True),
("TQ292814", True),
("TQ292815", True),
("TQ292816", True),
("TQ292817", True),
("TQ292818", True),
("TQ292819", False),
("TQ293806", False),
("TQ293807", True),
("TQ293808", True),
("TQ293809", True),
("TQ293810", True),
("TQ293811", True),
("TQ293812", True),
("TQ293813", True),
("TQ293814", True),
("TQ293815", True),
("TQ293816", True),
("TQ293817", True),
("TQ293818", True),
("TQ293819", False),
("TQ294806", False),
("TQ294807", True),
("TQ294808", True),
("TQ294809", True),
("TQ294810", True),
("TQ294811", True),
("TQ294812", True),
("TQ294813", True),
("TQ294814", True),
("TQ294815", True),
("TQ294816", True),
("TQ294817", True),
("TQ294818", True),
("TQ294819", False),
("TQ295806", False),
("TQ295807", True),
("TQ295808", True),
("TQ295809", True),
("TQ295810", True),
("TQ295811", True),
("TQ295812", True),
("TQ295813", True),
("TQ295814", True),
("TQ295815", True),
("TQ295816", True),
("TQ295817", True),
("TQ295818", True),
("TQ295819", False),
("TQ296806", False),
("TQ296807", True),
("TQ296808", True),
("TQ296809", True),
("TQ296810", True),
("TQ296811", True),
("TQ296812", True),
("TQ296813", True),
("TQ296814", True),
("TQ296815", True),
("TQ296816", True),
("TQ296817", True),
("TQ296818", True),
("TQ296819", False),
("TQ297806", False),
("TQ297807", True),
("TQ297808", True),
("TQ297809", True),
("TQ297810", True),
("TQ297811", True),
("TQ297812", True),
("TQ297813", True),
("TQ297814", True),
("TQ297815", True),
("TQ297816", True),
("TQ297817", True),
("TQ297818", True),
("TQ297819", False),
("TQ298805", False),
("TQ298806", False),
("TQ298807", False),
("TQ298808", False),
("TQ298809", False),
("TQ298810", False),
("TQ298811", False),
("TQ298812", False),
("TQ298813", False),
("TQ298814", False),
("TQ298815", False),
("TQ298816", False),
("TQ298817", False),
("TQ298818", False),
("TQ298819", False),
],
),
(
Polygon(
[
(528511.0, 180999.0),
(528777.0, 182005.0),
(529816.0, 181977.0),
(529850.0, 180600.0),
(528878.0, 180755.0),
(528511.0, 180999.0),
],
[
[
(528750.0, 181241.0),
(529251.0, 181755.0),
(529742.0, 181667.0),
(529511.0, 180765.0),
(528750.0, 181241.0),
]
],
),
100,
[
("TQ285809", False),
("TQ285810", False),
("TQ285811", False),
("TQ285812", False),
("TQ285813", False),
("TQ286808", False),
("TQ286809", False),
("TQ286810", True),
("TQ286811", True),
("TQ286812", True),
("TQ286813", False),
("TQ286814", False),
("TQ286815", False),
("TQ286816", False),
("TQ286817", False),
("TQ287808", False),
("TQ287809", True),
("TQ287810", True),
("TQ287811", True),
("TQ287812", False),
("TQ287813", True),
("TQ287814", True),
("TQ287815", True),
("TQ287816", True),
("TQ287817", False),
("TQ287818", False),
("TQ287819", False),
("TQ287820", False),
("TQ288807", False),
("TQ288808", False),
("TQ288809", True),
("TQ288810", True),
("TQ288811", False),
("TQ288812", False),
("TQ288813", False),
("TQ288814", True),
("TQ288815", True),
("TQ288816", True),
("TQ288817", True),
("TQ288818", True),
("TQ288819", True),
("TQ288820", False),
("TQ289807", False),
("TQ289808", True),
("TQ289809", True),
("TQ289810", False),
("TQ289811", False),
("TQ289813", False),
("TQ289814", False),
("TQ289815", True),
("TQ289816", True),
("TQ289817", True),
("TQ289818", True),
("TQ289819", False),
("TQ289820", False),
("TQ290807", False),
("TQ290808", True),
("TQ290809", True),
("TQ290810", False),
("TQ290814", False),
("TQ290815", False),
("TQ290816", False),
("TQ290817", True),
("TQ290818", True),
("TQ290819", False),
("TQ291807", False),
("TQ291808", True),
("TQ291809", False),
("TQ291810", False),
("TQ291816", False),
("TQ291817", False),
("TQ291818", True),
("TQ291819", False),
("TQ292806", False),
("TQ292807", False),
("TQ292808", False),
("TQ292809", False),
("TQ292817", False),
("TQ292818", True),
("TQ292819", False),
("TQ293806", False),
("TQ293807", True),
("TQ293808", False),
("TQ293817", False),
("TQ293818", True),
("TQ293819", False),
("TQ294806", False),
("TQ294807", False),
("TQ294808", False),
("TQ294817", False),
("TQ294818", True),
("TQ294819", False),
("TQ295806", False),
("TQ295807", False),
("TQ295808", False),
("TQ295809", False),
("TQ295810", False),
("TQ295811", False),
("TQ295816", False),
("TQ295817", False),
("TQ295818", True),
("TQ295819", False),
("TQ296806", False),
("TQ296807", True),
("TQ296808", True),
("TQ296809", True),
("TQ296810", True),
("TQ296811", False),
("TQ296812", False),
("TQ296813", False),
("TQ296814", False),
("TQ296815", False),
("TQ296816", False),
("TQ296817", True),
("TQ296818", True),
("TQ296819", False),
("TQ297806", False),
("TQ297807", True),
("TQ297808", True),
("TQ297809", True),
("TQ297810", True),
("TQ297811", True),
("TQ297812", True),
("TQ297813", True),
("TQ297814", True),
("TQ297815", False),
("TQ297816", False),
("TQ297817", True),
("TQ297818", True),
("TQ297819", False),
("TQ298805", False),
("TQ298806", False),
("TQ298807", False),
("TQ298808", False),
("TQ298809", False),
("TQ298810", False),
("TQ298811", False),
("TQ298812", False),
("TQ298813", False),
("TQ298814", False),
("TQ298815", False),
("TQ298816", False),
("TQ298817", False),
("TQ298818", False),
("TQ298819", False),
],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_MARKED_POLYGONS,
ids=[
"polygon_marked_1km",
"polygon_marked_hole_1km",
"polygon_marked_100m",
"polygon_marked_hole_100m",
],
)
def test__bng_geom_marked(
geometry: Polygon, resolution: int, expected_bng: Sequence[Tuple[str, bool]]
) -> None:
"""Returns set of expected British National Grid references."""
assert expected_bng.sort(key=lambda x: x[0]) == _bng_geom_marked(
geometry, resolution, pad=1
).sort(key=lambda x: x[0])
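Each expected entry in the marked fixtures is a `(reference, flag)` pair. In these fixtures the flag is `True` only for squares well inside the polygon interior and never for edge squares, which suggests a "fully contained" marker, though that is inferred from the data rather than from a documented contract of `_bng_geom_marked`. Partitioning a marked result by the flag is straightforward:

```python
def split_marked(marked):
    """Partition (reference, flag) pairs into flagged and unflagged references."""
    flagged = [ref for ref, flag in marked if flag]
    unflagged = [ref for ref, flag in marked if not flag]
    return flagged, unflagged

flagged, unflagged = split_marked([("TQ2880", False), ("TQ2980", True)])
# flagged == ["TQ2980"], unflagged == ["TQ2880"]
```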
PARAMETERS_MARKED_MULTIPOLYGONS = [
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(531800, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
("TQ3180", False),
("TQ2882", False),
("TQ3184", False),
("TQ2883", False),
("TQ2780", False),
("TQ2781", False),
("TQ3182", False),
("TQ3082", False),
("TQ2979", False),
("TQ2981", False),
("TQ2885", False),
("TQ2884", False),
("TQ3183", False),
("TQ2982", False),
("TQ2782", False),
("TQ3081", False),
("TQ2881", False),
("TQ2880", False),
("TQ3185", False),
("TQ3083", False),
("TQ2980", False),
("TQ2984", True),
("TQ2983", False),
("TQ2985", False),
("TQ3181", False),
("TQ3084", True),
("TQ3080", False),
("TQ3085", False),
("TQ3179", False),
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(530600, 181000),
(531000, 182000),
(532200, 181500),
(531200, 180000),
(530600, 181000),
]
),
]
),
1_000,
[
("TQ3180", False),
("TQ3281", False),
("TQ2882", False),
("TQ3184", False),
("TQ2883", False),
("TQ2780", False),
("TQ2781", False),
("TQ3182", False),
("TQ3082", False),
("TQ2979", False),
("TQ2981", False),
("TQ2885", False),
("TQ2884", False),
("TQ3183", False),
("TQ2982", False),
("TQ2782", False),
("TQ3081", False),
("TQ2881", False),
("TQ2880", False),
("TQ3185", False),
("TQ3083", False),
("TQ2980", False),
("TQ2984", True),
("TQ2983", False),
("TQ2985", False),
("TQ3181", False),
("TQ3084", True),
("TQ3080", False),
("TQ3085", False),
("TQ3179", False),
],
),
(
MultiPolygon(
[
Polygon(
[
(529999, 179999),
(531000, 183000),
(531500, 185000),
(529000, 185500),
(527499, 181000),
(528500, 180500),
(528750, 182000),
(529750, 183000),
(529500, 179500),
]
),
Polygon(
[
(533600, 181000),
(534000, 182000),
(535200, 181500),
(534200, 180000),
(533600, 181000),
]
),
]
),
1_000,
[
("TQ2882", False),
("TQ3184", False),
("TQ2883", False),
("TQ2780", False),
("TQ2781", False),
("TQ3481", False),
("TQ3182", False),
("TQ3082", False),
("TQ3482", False),
("TQ2979", False),
("TQ2981", False),
("TQ2885", False),
("TQ2884", False),
("TQ3479", False),
("TQ3380", False),
("TQ3183", False),
("TQ2982", False),
("TQ2782", False),
("TQ3480", False),
("TQ3081", False),
("TQ2881", False),
("TQ2880", False),
("TQ3382", False),
("TQ3185", False),
("TQ3083", False),
("TQ2980", False),
("TQ2984", True),
("TQ2983", False),
("TQ2985", False),
("TQ3084", True),
("TQ3080", False),
("TQ3085", False),
("TQ3581", False),
("TQ3381", False),
],
),
]
@pytest.mark.parametrize(
"geometry,resolution,expected_bng",
PARAMETERS_MARKED_MULTIPOLYGONS,
ids=[
"multipoly_index_simple_1km",
"multipoly_index_merge_1km",
"multipoly_index_separate_1km",
],
)
def test__bng_multigeom_marked(
geometry: MultiPolygon,
resolution: int,
    expected_bng: Sequence[Tuple[str, bool]],
) -> None:
"""Returns set of expected British National Grid references."""
assert expected_bng.sort(key=lambda x: x[0]) == _bng_multigeom_marked(
geometry, resolution, pad=1
).sort(key=lambda x: x[0])
PARAMETER_WKT_FROM_BNG = [
(
"TQ",
"""POLYGON((500000 100000,
600000 100000,
600000 200000,
500000 200000,
500000 100000))""",
),
(
"TQ28",
"""POLYGON((520000 180000,
530000 180000,
530000 190000,
520000 190000,
520000 180000))""",
),
(
"TQ2984",
"""POLYGON((529000 184000,
530000 184000,
530000 185000,
529000 185000,
529000 184000))""",
),
(
"TQ295845",
"""POLYGON((529500 184500,
529600 184500,
529600 184600,
529500 184600,
529500 184500))""",
),
(
"TQ29548454",
"""POLYGON((529540 184540,
529550 184540,
529550 184550,
529540 184550,
529540 184540))""",
),
(
"TQ2954284542",
"""POLYGON((529542 184542,
529543 184542,
529543 184543,
529542 184543,
529542 184542))""",
),
]
@pytest.mark.parametrize(
"bng,expected_wkt",
PARAMETER_WKT_FROM_BNG,
ids=["bng_100km", "bng_10km", "bng_1km", "bng_100m", "bng_10m", "bng_1m"],
)
def test_wkt_from_bng(
bng: str,
expected_wkt: str,
) -> None:
"""Returns set of expected British National Grid references."""
assert sub(r"\s", "", expected_wkt) == sub(r"\s", "", wkt_from_bng(bng))
"""
VirusTotalAPIFiles class testing module.
Author: Evgeny Drobotun (c) 2020
License: MIT (https://github.com/drobotun/virustotalapi3/blob/master/LICENSE)
"""
import unittest
import errno
from unittest import mock
import requests
from vtapi3 import VirusTotalAPIFiles, VirusTotalAPIError
def raise_file_not_found(file_path, type_access):
"""Mock function for implementing the FileNotFoundError exception."""
raise FileNotFoundError
def raise_permission_error(file_path, type_access):
"""Mock function for implementing the PermissionError exception."""
raise PermissionError
def raise_os_error(file_path, type_access):
"""Mock function for implementing the OSError exception."""
raise OSError
def post_mock_response_upload(status_code, content):
"""Mock function for implementing test responses from the server for upload() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, files, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error_upload(api_url, headers, files, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for upload() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_upload(api_url, headers, files, timeout, proxies):
"""Mock function for implementing the Timeout exception for upload() function."""
raise requests.exceptions.Timeout
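The factories above all follow the same closure pattern: build a canned `Mock` once, then return a function whose signature matches the `requests` call being patched, so the closure can be handed directly to `mock.patch`. A self-contained illustration of the pattern, using a demo factory and placeholder arguments rather than the module's own:

```python
from unittest import mock

def post_mock_response_demo(status_code, content):
    """Capture a canned Mock in a closure, mirroring the factories above."""
    canned = mock.Mock()
    canned.status_code = status_code
    canned.content = content
    def mock_response(api_url, headers, files, timeout, proxies):
        return canned
    return mock_response

fake_post = post_mock_response_demo(200, b'{"data": {}}')
# Call it the way the patched requests.post would be called.
response = fake_post('https://example.api/files', {}, {}, 1, None)
assert response.status_code == 200
assert response.content == b'{"data": {}}'
```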
def get_mock_response(status_code, content):
"""Mock function for implementing test responses from the server."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, timeout, proxies):
return test_mock
return mock_response
def post_mock_response(status_code, content):
"""Mock function for implementing test responses from the server for analyse() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error(api_url, headers, timeout, proxies):
"""Mock function for implementing the ConnectionError exception."""
raise requests.exceptions.ConnectionError
def raise_timeout_error(api_url, headers, timeout, proxies):
"""Mock function for implementing the Timeout exception."""
raise requests.exceptions.Timeout
def get_mock_response_comments(status_code, content):
"""Mock function for implementing test responses from the server for get_comments() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, params, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error_get_comments(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for get_comments() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_get_comments(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the Timeout exception for for get_comments() function."""
raise requests.exceptions.Timeout
def post_mock_response_comments(status_code, content):
"""Mock function for implementing test responses from the server for put_comments() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, json, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error_put_comments(api_url, headers, json, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for put_comments() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_put_comments(api_url, headers, json, timeout, proxies):
"""Mock function for implementing the Timeout exception for for put_comments() function."""
raise requests.exceptions.Timeout
def get_mock_response_votes(status_code, content):
"""Mock function for implementing test responses from the server for get_votes() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, params, timeout, proxies):
return test_mock
return mock_response
def post_mock_response_votes(status_code, content):
"""Mock function for implementing test responses from the server for put_votes() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, json, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error_get_votes(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for get_votes() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_get_votes(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the Timeout exception for for get_votes() function."""
raise requests.exceptions.Timeout
def raise_connection_error_put_votes(api_url, headers, json, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for put_votes() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_put_votes(api_url, headers, json, timeout, proxies):
"""Mock function for implementing the Timeout exception for for put_votes() function."""
raise requests.exceptions.Timeout
def get_mock_response_relationship(status_code, content):
"""Mock function for implementing test responses from the server for get_relationship() function."""
test_mock = mock.Mock()
test_mock.status_code = status_code
test_mock.content = content
def mock_response(api_url, headers, params, timeout, proxies):
return test_mock
return mock_response
def raise_connection_error_get_relationship(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the ConnectionError exception for get_relationship() function."""
raise requests.exceptions.ConnectionError
def raise_timeout_error_get_relationship(api_url, headers, params, timeout, proxies):
"""Mock function for implementing the Timeout exception for for get_relationship() function."""
raise requests.exceptions.Timeout
class TestFile(unittest.TestCase):
"""The class that implements the VirusTotalAPIFiles class testing functions."""
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
def test_get_file_id_sha256(self):
TEST_SHA256 = '9b54bb6ed1c5574aeb5343b0c5e9686ab4b68c65bc2b5d408b7ed16499878ad8'
file_id = VirusTotalAPIFiles.get_file_id('')
self.assertEqual(file_id, TEST_SHA256)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
def test_get_file_id_sha1(self):
TEST_SHA1 = '668a6444cd1e22c70919dd8ff8d871be86944d68'
file_id = VirusTotalAPIFiles.get_file_id('', 'sha1')
self.assertEqual(file_id, TEST_SHA1)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
def test_get_file_id_md5(self):
TEST_MD5 = 'e4b681fbffdde3e3f289d39916af6042'
file_id = VirusTotalAPIFiles.get_file_id('', 'md5')
self.assertEqual(file_id, TEST_MD5)
@mock.patch('builtins.open', raise_file_not_found)
def test_get_file_id_file_error(self):
err_code = 0
try:
VirusTotalAPIFiles.get_file_id('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ENOENT)
@mock.patch('builtins.open', raise_permission_error)
def test_get_file_id_permission_error(self):
err_code = 0
try:
VirusTotalAPIFiles.get_file_id('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.EPERM)
@mock.patch('builtins.open', raise_os_error)
def test_get_file_id_io_error(self):
err_code = 0
try:
VirusTotalAPIFiles.get_file_id('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.EIO)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
@mock.patch('requests.post', post_mock_response_upload(requests.codes['ok'],
'Test VirusTotal content'))
def test_upload(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.upload('')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('builtins.open', raise_file_not_found)
def test_upload_file_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.upload('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ENOENT)
@mock.patch('builtins.open', raise_permission_error)
def test_upload_permission_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.upload('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.EPERM)
@mock.patch('builtins.open', raise_os_error)
def test_upload_io_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.upload('test_file')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.EIO)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
@mock.patch('requests.post', raise_timeout_error_upload)
def test_upload_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.upload('')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
@mock.patch('requests.post', raise_connection_error_upload)
def test_upload_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.upload('')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('builtins.open', mock.mock_open(read_data=b'This is a test file for VirusTotal API validation'))
@mock.patch('requests.post', post_mock_response_upload(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_upload_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.upload('')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_upload_url(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_upload_url()
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', raise_timeout_error)
def test_get_upload_url_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_upload_url()
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error)
def test_get_upload_url_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_upload_url()
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_upload_url_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_upload_url()
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_report_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_report('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_report_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_report('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error)
def test_get_report_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_report('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error)
def test_get_report_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_report('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_report_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_report('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.post', post_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_analyse(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.analyse('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.post', post_mock_response(requests.codes['not_found'],
'Test VirusTotal content'))
def test_analyse_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.analyse('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.post', raise_timeout_error)
def test_analyse_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.analyse('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.post', raise_connection_error)
def test_analyse_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.analyse('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.post', post_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_analyse_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.analyse('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response_comments(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_comments(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_comments('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response_comments(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_comments_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_comments('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error_get_comments)
def test_get_comments_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_comments('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error_get_comments)
def test_get_comments_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_comments('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response_comments(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_comments_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_comments('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.post', post_mock_response_comments(requests.codes['ok'],
'Test VirusTotal content'))
def test_put_comments(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.put_comments('test_id', 'test_comments')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.post', raise_timeout_error_put_comments)
def test_put_comments_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.put_comments('test_id', 'test_comments')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.post', raise_connection_error_put_comments)
def test_put_comments_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.put_comments('test_id', 'test_comments')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.post', post_mock_response_comments(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_put_comments_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.put_comments('test_id', 'test_comments')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response_votes(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_votes(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_votes('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response_votes(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_votes_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_votes('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error_get_votes)
def test_get_votes_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_votes('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error_get_votes)
def test_get_votes_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_votes('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response_votes(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_votes_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_votes('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.post', post_mock_response_votes(requests.codes['ok'],
'Test VirusTotal content'))
def test_put_votes_harmless(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.put_votes('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.post', post_mock_response_votes(requests.codes['ok'],
'Test VirusTotal content'))
def test_put_votes_malicious(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.put_votes('test_id', True)
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.post', raise_timeout_error_put_votes)
def test_put_votes_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.put_votes('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.post', raise_connection_error_put_votes)
def test_put_votes_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.put_votes('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.post', post_mock_response_votes(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_put_votes_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.put_votes('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response_relationship(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_relationship(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_relationship('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response_relationship(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_relationship_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_relationship('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error_get_relationship)
def test_get_relationship_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_relationship('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error_get_relationship)
def test_get_relationship_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_relationship('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response_relationship(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_relationship_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_relationship('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_behaviours(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_behaviours('test_sandbox_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_behaviours_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_behaviours('test_sandbox_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error)
def test_get_behaviours_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_behaviours('test_sandbox_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error)
def test_get_behaviours_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_behaviours('test_sandbox_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_behaviours_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_behaviours('test_sandbox_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_download_url(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_download_url('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_download_url_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_download_url('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error)
def test_get_download_url_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_download_url('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error)
def test_get_download_url_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_download_url('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_download_url_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_download_url('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
@mock.patch('requests.get', get_mock_response(requests.codes['ok'],
'Test VirusTotal content'))
def test_get_download(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_download('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_OK)
@mock.patch('requests.get', get_mock_response(requests.codes['not_found'],
'Test VirusTotal content'))
def test_get_download_wrong_id(self):
vt_files = VirusTotalAPIFiles('test_api_key')
vt_files.get_download('test_id')
http_err = vt_files.get_last_http_error()
self.assertEqual(http_err, vt_files.HTTP_NOT_FOUND_ERROR)
@mock.patch('requests.get', raise_timeout_error)
def test_get_download_timeout_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_download('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ETIMEDOUT)
@mock.patch('requests.get', raise_connection_error)
def test_get_download_connection_error(self):
err_code = 0
vt_files = VirusTotalAPIFiles('test_api_key')
try:
vt_files.get_download('test_id')
except VirusTotalAPIError as err:
err_code = err.err_code
self.assertEqual(err_code, errno.ECONNABORTED)
@mock.patch('requests.get', get_mock_response(requests.codes['unauthorized'],
'Test VirusTotal content'))
def test_get_download_wrong_api_key(self):
vt_files_wrong_api_key = VirusTotalAPIFiles('test_api_key')
vt_files_wrong_api_key.get_download('test_id')
http_err = vt_files_wrong_api_key.get_last_http_error()
self.assertEqual(http_err, vt_files_wrong_api_key.HTTP_AUTHENTICATION_REQUIRED_ERROR)
if __name__ == '__main__':
unittest.main()
"""Imports for Python API.
This file is MACHINE GENERATED! Do not edit.
Generated by: tensorflow/tools/api/generator/create_python_api.py script.
"""
from tensorflow.python import name_scope
from tensorflow.python.keras._impl.keras.backend import abs
from tensorflow.python.keras._impl.keras.backend import all
from tensorflow.python.keras._impl.keras.backend import any
from tensorflow.python.keras._impl.keras.backend import arange
from tensorflow.python.keras._impl.keras.backend import argmax
from tensorflow.python.keras._impl.keras.backend import argmin
from tensorflow.python.keras._impl.keras.backend import backend
from tensorflow.python.keras._impl.keras.backend import batch_dot
from tensorflow.python.keras._impl.keras.backend import batch_flatten
from tensorflow.python.keras._impl.keras.backend import batch_get_value
from tensorflow.python.keras._impl.keras.backend import batch_normalization
from tensorflow.python.keras._impl.keras.backend import batch_set_value
from tensorflow.python.keras._impl.keras.backend import bias_add
from tensorflow.python.keras._impl.keras.backend import binary_crossentropy
from tensorflow.python.keras._impl.keras.backend import cast
from tensorflow.python.keras._impl.keras.backend import cast_to_floatx
from tensorflow.python.keras._impl.keras.backend import categorical_crossentropy
from tensorflow.python.keras._impl.keras.backend import clear_session
from tensorflow.python.keras._impl.keras.backend import clip
from tensorflow.python.keras._impl.keras.backend import concatenate
from tensorflow.python.keras._impl.keras.backend import constant
from tensorflow.python.keras._impl.keras.backend import conv1d
from tensorflow.python.keras._impl.keras.backend import conv2d
from tensorflow.python.keras._impl.keras.backend import conv2d_transpose
from tensorflow.python.keras._impl.keras.backend import conv3d
from tensorflow.python.keras._impl.keras.backend import cos
from tensorflow.python.keras._impl.keras.backend import count_params
from tensorflow.python.keras._impl.keras.backend import ctc_batch_cost
from tensorflow.python.keras._impl.keras.backend import ctc_decode
from tensorflow.python.keras._impl.keras.backend import ctc_label_dense_to_sparse
from tensorflow.python.keras._impl.keras.backend import dot
from tensorflow.python.keras._impl.keras.backend import dropout
from tensorflow.python.keras._impl.keras.backend import dtype
from tensorflow.python.keras._impl.keras.backend import elu
from tensorflow.python.keras._impl.keras.backend import epsilon
from tensorflow.python.keras._impl.keras.backend import equal
from tensorflow.python.keras._impl.keras.backend import eval
from tensorflow.python.keras._impl.keras.backend import exp
from tensorflow.python.keras._impl.keras.backend import expand_dims
from tensorflow.python.keras._impl.keras.backend import eye
from tensorflow.python.keras._impl.keras.backend import flatten
from tensorflow.python.keras._impl.keras.backend import floatx
from tensorflow.python.keras._impl.keras.backend import foldl
from tensorflow.python.keras._impl.keras.backend import foldr
from tensorflow.python.keras._impl.keras.backend import function
from tensorflow.python.keras._impl.keras.backend import gather
from tensorflow.python.keras._impl.keras.backend import get_session
from tensorflow.python.keras._impl.keras.backend import get_uid
from tensorflow.python.keras._impl.keras.backend import get_value
from tensorflow.python.keras._impl.keras.backend import gradients
from tensorflow.python.keras._impl.keras.backend import greater
from tensorflow.python.keras._impl.keras.backend import greater_equal
from tensorflow.python.keras._impl.keras.backend import hard_sigmoid
from tensorflow.python.keras._impl.keras.backend import image_data_format
from tensorflow.python.keras._impl.keras.backend import in_test_phase
from tensorflow.python.keras._impl.keras.backend import in_top_k
from tensorflow.python.keras._impl.keras.backend import in_train_phase
from tensorflow.python.keras._impl.keras.backend import int_shape
from tensorflow.python.keras._impl.keras.backend import is_sparse
from tensorflow.python.keras._impl.keras.backend import l2_normalize
from tensorflow.python.keras._impl.keras.backend import learning_phase
from tensorflow.python.keras._impl.keras.backend import less
from tensorflow.python.keras._impl.keras.backend import less_equal
from tensorflow.python.keras._impl.keras.backend import log
from tensorflow.python.keras._impl.keras.backend import manual_variable_initialization
from tensorflow.python.keras._impl.keras.backend import map_fn
from tensorflow.python.keras._impl.keras.backend import max
from tensorflow.python.keras._impl.keras.backend import maximum
from tensorflow.python.keras._impl.keras.backend import mean
from tensorflow.python.keras._impl.keras.backend import min
from tensorflow.python.keras._impl.keras.backend import minimum
from tensorflow.python.keras._impl.keras.backend import moving_average_update
from tensorflow.python.keras._impl.keras.backend import ndim
from tensorflow.python.keras._impl.keras.backend import normalize_batch_in_training
from tensorflow.python.keras._impl.keras.backend import not_equal
from tensorflow.python.keras._impl.keras.backend import one_hot
from tensorflow.python.keras._impl.keras.backend import ones
from tensorflow.python.keras._impl.keras.backend import ones_like
from tensorflow.python.keras._impl.keras.backend import permute_dimensions
from tensorflow.python.keras._impl.keras.backend import placeholder
from tensorflow.python.keras._impl.keras.backend import pool2d
from tensorflow.python.keras._impl.keras.backend import pool3d
from tensorflow.python.keras._impl.keras.backend import pow
from tensorflow.python.keras._impl.keras.backend import print_tensor
from tensorflow.python.keras._impl.keras.backend import prod
from tensorflow.python.keras._impl.keras.backend import random_binomial
from tensorflow.python.keras._impl.keras.backend import random_normal
from tensorflow.python.keras._impl.keras.backend import random_normal_variable
from tensorflow.python.keras._impl.keras.backend import random_uniform
from tensorflow.python.keras._impl.keras.backend import random_uniform_variable
from tensorflow.python.keras._impl.keras.backend import relu
from tensorflow.python.keras._impl.keras.backend import repeat
from tensorflow.python.keras._impl.keras.backend import repeat_elements
from tensorflow.python.keras._impl.keras.backend import reset_uids
from tensorflow.python.keras._impl.keras.backend import reshape
from tensorflow.python.keras._impl.keras.backend import resize_images
from tensorflow.python.keras._impl.keras.backend import resize_volumes
from tensorflow.python.keras._impl.keras.backend import reverse
from tensorflow.python.keras._impl.keras.backend import rnn
from tensorflow.python.keras._impl.keras.backend import round
from tensorflow.python.keras._impl.keras.backend import separable_conv2d
from tensorflow.python.keras._impl.keras.backend import set_epsilon
from tensorflow.python.keras._impl.keras.backend import set_floatx
from tensorflow.python.keras._impl.keras.backend import set_image_data_format
from tensorflow.python.keras._impl.keras.backend import set_learning_phase
from tensorflow.python.keras._impl.keras.backend import set_session
from tensorflow.python.keras._impl.keras.backend import set_value
from tensorflow.python.keras._impl.keras.backend import shape
from tensorflow.python.keras._impl.keras.backend import sigmoid
from tensorflow.python.keras._impl.keras.backend import sign
from tensorflow.python.keras._impl.keras.backend import sin
from tensorflow.python.keras._impl.keras.backend import softmax
from tensorflow.python.keras._impl.keras.backend import softplus
from tensorflow.python.keras._impl.keras.backend import softsign
from tensorflow.python.keras._impl.keras.backend import sparse_categorical_crossentropy
from tensorflow.python.keras._impl.keras.backend import spatial_2d_padding
from tensorflow.python.keras._impl.keras.backend import spatial_3d_padding
from tensorflow.python.keras._impl.keras.backend import sqrt
from tensorflow.python.keras._impl.keras.backend import square
from tensorflow.python.keras._impl.keras.backend import squeeze
from tensorflow.python.keras._impl.keras.backend import stack
from tensorflow.python.keras._impl.keras.backend import std
from tensorflow.python.keras._impl.keras.backend import stop_gradient
from tensorflow.python.keras._impl.keras.backend import sum
from tensorflow.python.keras._impl.keras.backend import switch
from tensorflow.python.keras._impl.keras.backend import tanh
from tensorflow.python.keras._impl.keras.backend import temporal_padding
from tensorflow.python.keras._impl.keras.backend import to_dense
from tensorflow.python.keras._impl.keras.backend import transpose
from tensorflow.python.keras._impl.keras.backend import truncated_normal
from tensorflow.python.keras._impl.keras.backend import update
from tensorflow.python.keras._impl.keras.backend import update_add
from tensorflow.python.keras._impl.keras.backend import update_sub
from tensorflow.python.keras._impl.keras.backend import var
from tensorflow.python.keras._impl.keras.backend import variable
from tensorflow.python.keras._impl.keras.backend import zeros
from tensorflow.python.keras._impl.keras.backend import zeros_like
from .cprint import *
from .binrep import *
# File: src/pyrin/api/router/decorators.py
# Repo: wilsonGmn/pyrin (BSD-3-Clause license)
# -*- coding: utf-8 -*-
"""
router decorators module.
"""
import pyrin.api.router.services as router_services
from pyrin.core.enumerations import HTTPMethodEnum
def api(url, methods=None, authenticated=True, permissions=None, **options):
"""
decorator to register an api handler for application.
this decorator could take all the options that are used in route initialization.
:param str url: the url rule as string.
:param str | tuple[str] methods: http methods that this rule should handle.
if not provided, defaults to `GET`.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
                                if not provided, it will be taken from rule-based
                                authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
                              if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods or raise
an error if not provided. defaults to False.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
                                     note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
    :keyword str | list[str] environments: a list of all environments on which
                                           this route must be exposed.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
                           `start_index` respectively. the `indexed` keyword
                           only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
actually, `depth` specifies that relationships in an
entity should be followed by how much depth.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
                        please be careful when increasing `depth`; it could fail the
                        application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
    :keyword bool no_cache: a value indicating that the response returned from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
                                defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
                                if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in
                                    response even if the request does not have an origin header.
                                    if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
                                          if not provided, it will be taken from the
                                          cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
                               if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
    :keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
                              create a url that can be built. this is useful if you have
                              resources on a subdomain or folder that are not handled by
                              the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
    :keyword bool websocket: if set to True, this rule only matches for
websocket (`ws://`, `wss://`) requests. by default,
rules will only match for http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
def decorator(func):
"""
decorates the given function and registers it as an api handler.
:param function func: function to register it as an api handler.
:rtype: function
"""
router_services.add_route(url, view_func=func, methods=methods,
authenticated=authenticated,
permissions=permissions, **options)
return func
return decorator
def post(url, authenticated=True, permissions=None, **options):
"""
decorator to register an application api handler for `POST` http method.
this decorator could take all the options that are used in route
initialization except the `methods`, which will be set to `POST`.
:param str url: the url rule as string.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
                                if not provided, it will be taken from rule-based
                                authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
                              if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods or raise
an error if not provided. defaults to False.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
                                     note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
    :keyword str | list[str] environments: a list of all environments on which
                                           this route must be exposed.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
                           `start_index` respectively. the `indexed` keyword
                           only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
actually, `depth` specifies that relationships in an
entity should be followed by how much depth.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
                        please be careful when increasing `depth`; it could fail the
                        application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
    :keyword bool no_cache: a value indicating that the response returned from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
                                defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
                                if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in
                                    response even if the request does not have an origin header.
                                    if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
                                          if not provided, it will be taken from the
                                          cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
                               if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
    :keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
                              create a url that can be built. this is useful if you have
                              resources on a subdomain or folder that are not handled by
                              the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
    :keyword bool websocket: if set to True, this rule only matches for
websocket (`ws://`, `wss://`) requests. by default,
rules will only match for http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
def decorator(func):
"""
decorates the given function and registers it as an api handler.
:param function func: function to register it as an api handler.
:rtype: function
"""
router_services.add_route(url, view_func=func,
methods=HTTPMethodEnum.POST,
authenticated=authenticated,
permissions=permissions, **options)
return func
return decorator
def patch(url, authenticated=True, permissions=None, **options):
"""
decorator to register an application api handler for `PATCH` http method.
this decorator could take all the options that are used in route
initialization except the `methods`, which will be set to `PATCH`.
:param str url: the url rule as string.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
                                if not provided, it will be taken from rule-based
                                authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
                              if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods or raise
an error if not provided. defaults to False.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
                                     note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
    :keyword str | list[str] environments: a list of all environments on which
                                           this route must be exposed.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
                           `start_index` respectively. the `indexed` keyword
                           only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
actually, `depth` specifies that relationships in an
entity should be followed by how much depth.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
                        please be careful when increasing `depth`; it could fail the
                        application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
    :keyword bool no_cache: a value indicating that the response returned from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
                                defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
                                if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in
                                    response even if the request does not have an origin header.
                                    if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
                                          if not provided, it will be taken from the
                                          cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
                               if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
    :keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
                              create a url that can be built. this is useful if you have
                              resources on a subdomain or folder that are not handled by
                              the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
    :keyword bool websocket: if set to True, this rule only matches for
websocket (`ws://`, `wss://`) requests. by default,
rules will only match for http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
def decorator(func):
"""
decorates the given function and registers it as an api handler.
:param function func: function to register it as an api handler.
:rtype: function
"""
router_services.add_route(url, view_func=func,
methods=HTTPMethodEnum.PATCH,
authenticated=authenticated,
permissions=permissions, **options)
return func
return decorator
def put(url, authenticated=True, permissions=None, **options):
"""
decorator to register an application api handler for `PUT` http method.
this decorator could take all the options that are used in route
initialization except the `methods`, which will be set to `PUT`.
:param str url: the url rule as string.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
if not provided, it will be taken from rule-based
authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods, otherwise
an error will be raised. defaults to False if not provided.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
:keyword str | list[str] environments: a list of all environments that this
route must be exposed on.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
`start_index` respectively. the `indexed` keyword
only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
in other words, `depth` specifies how deep the
relationships of an entity should be followed.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
be careful when increasing `depth`, it could crash the
application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
:keyword bool no_cache: a value indicating that the response returning from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in the
response even if the request does not have an origin header.
if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with the default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with the default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with the default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
if not provided, it will be taken from the cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
:keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
create a url that can be built. this is useful if you have
resources on a subdomain or folder that are not handled by
the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
:keyword bool websocket: if set to True, this rule only matches
websocket (`ws://`, `wss://`) requests. by default,
rules only match http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
    def decorator(func):
        """
        decorates the given function and registers it as an api handler.

        :param function func: the function to register as an api handler.

        :rtype: function
        """

        router_services.add_route(url, view_func=func,
                                  methods=HTTPMethodEnum.PUT,
                                  authenticated=authenticated,
                                  permissions=permissions, **options)
        return func

    return decorator

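The `strict_status` keyword documented above distinguishes two ways of deciding whether a status code counts as processed: membership in the response-code enums, or simply falling in the information/success/redirection numeric ranges. A self-contained sketch of that rule follows; the concrete code set below is illustrative, standing in for the `InformationResponseCodeEnum`, `SuccessfulResponseCodeEnum`, and `RedirectionResponseCodeEnum` values:

```python
def is_processed(status, strict_status=True):
    # illustrative stand-in for the framework's response-code enums
    STRICT = {100, 101, 102, 200, 201, 202, 204, 301, 302, 303, 304, 307, 308}
    if strict_status:
        # strict mode: only codes listed in the enums count as processed
        return status in STRICT
    # non-strict mode: any code in the 1xx/2xx/3xx ranges counts
    return 100 <= status < 400
```

With `strict_status=False`, an unusual but range-valid code such as 226 (IM Used) would be treated as processed even though it is not in the illustrative enum set.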
def get(url, authenticated=True, permissions=None, **options):
"""
decorator to register an application api handler for `GET` http method.
this decorator could take all the options that are used in route
initialization except the `methods`, which will be set to `GET`.
:param str url: the url rule as string.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
if not provided, it will be taken from rule-based
authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods, otherwise
an error will be raised. defaults to False if not provided.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
:keyword str | list[str] environments: a list of all environments that this
route must be exposed on.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
`start_index` respectively. the `indexed` keyword
only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
in other words, `depth` specifies how deep the
relationships of an entity should be followed.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
be careful when increasing `depth`, it could crash the
application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
:keyword bool no_cache: a value indicating that the response returning from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in the
response even if the request does not have an origin header.
if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with the default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with the default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with the default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
if not provided, it will be taken from the cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
:keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
create a url that can be built. this is useful if you have
resources on a subdomain or folder that are not handled by
the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
:keyword bool websocket: if set to True, this rule only matches
websocket (`ws://`, `wss://`) requests. by default,
rules only match http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
    def decorator(func):
        """
        decorates the given function and registers it as an api handler.

        :param function func: the function to register as an api handler.

        :rtype: function
        """

        router_services.add_route(url, view_func=func,
                                  methods=HTTPMethodEnum.GET,
                                  authenticated=authenticated,
                                  permissions=permissions, **options)
        return func

    return decorator

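The `paged`, `page_size`, and `max_page_size` keywords described in the docstring interact in a simple way: a route falls back to its default page size when the client requests nothing, and never serves more rows than the maximum. A hedged sketch of that clamping logic follows; the function name and the default values are illustrative, not the framework's actual implementation:

```python
def effective_page_size(requested, page_size=25, max_page_size=100):
    """return the page size a route would actually use (illustrative)."""
    if requested is None:
        # no client request: use the route default, capped by the maximum
        return min(page_size, max_page_size)
    # honor the client's request up to the maximum, and never below 1
    return max(1, min(requested, max_page_size))
```

In the real framework the defaults would come from the `default_page_size` and `max_page_size` keys of the `database` config store rather than literal arguments.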
def delete(url, authenticated=True, permissions=None, **options):
"""
decorator to register an application api handler for `DELETE` http method.
this decorator could take all the options that are used in route
initialization except the `methods`, which will be set to `DELETE`.
:param str url: the url rule as string.
:param bool authenticated: specifies that this route could not be accessed
if the requester has not been authenticated.
defaults to True if not provided.
:param PermissionBase | tuple[PermissionBase] permissions: all required permissions
for accessing this route.
:keyword str authenticator: the authenticator name to be used for this route.
if not provided, it will be taken from rule-based
authenticators if possible. otherwise the
`default_authenticator` config will be used.
if no default is set in `authentication` config
store, it raises an error.
it is only used if this route has `authenticated=True`.
:keyword bool fresh_auth: specifies that this route could not be accessed
if the requester does not have a fresh authentication.
fresh authentication means an authentication that
has been done by providing user credentials to
server. defaults to False if not provided.
:keyword bool replace: specifies that this route must replace any existing
route with the same url and http methods, otherwise
an error will be raised. defaults to False if not provided.
:keyword int max_content_length: max content length that this route could handle,
in bytes. if not provided, it will be set to
`restricted_max_content_length` api config key.
note that this value should be less than or equal
to `max_content_length` api config key, otherwise
it will cause an error.
:keyword int status_code: status code to be returned on successful responses.
defaults to corresponding status code of request's
http method if not provided.
:note status_code: it could be a value from `InformationResponseCodeEnum`
or `SuccessfulResponseCodeEnum` or `RedirectionResponseCodeEnum`.
:keyword bool strict_status: specifies that it should only consider
the status code as processed if it is from
`InformationResponseCodeEnum` or
`SuccessfulResponseCodeEnum` or
`RedirectionResponseCodeEnum` values. otherwise
all codes from `INFORMATION_CODE_MIN`
to `INFORMATION_CODE_MAX` or from
`SUCCESS_CODE_MIN` to `SUCCESS_CODE_MAX`
or from `REDIRECTION_CODE_MIN` to
`REDIRECTION_CODE_MAX` will be considered
as processed. defaults to True if not provided.
:keyword str | list[str] environments: a list of all environments that this
route must be exposed on.
the values could be from all available
environments in environments config store.
for example: `production`, `development`.
if not provided, the route will be exposed
on all environments.
:keyword ResultSchema | type[ResultSchema] result_schema: result schema to be used
to filter results. it could
be an instance or a type
of `ResultSchema` class.
:keyword bool indexed: specifies that list results must
include an extra field as row index.
the name of the index field and the initial value
of index could be provided by `index_name` and
`start_index` respectively. the `indexed` keyword
only has effect if the returned result contains a list
of objects.
:keyword str index_name: name of the extra field to contain
the row index of each result. if not provided
defaults to `row_num` value.
:keyword int start_index: the initial value of row index. if not
provided, starts from 1.
:keyword SECURE_TRUE | SECURE_FALSE readable: specifies that any column or attribute
which has `allow_read=False` or its name
starts with underscore `_`, should not
be included in result dict. defaults to
`SECURE_TRUE` if not provided. it will
be used only for entity conversion. this
value will override the corresponding
value of `result_schema` if provided.
:keyword int depth: a value indicating the depth for conversion.
for example if entity A has a relationship with
entity B and there is a list of B in A, if `depth=0`
is provided, then just columns of A will be available
in result dict, but if `depth=1` is provided, then all
B entities in A will also be included in the result dict.
in other words, `depth` specifies how deep the
relationships of an entity should be followed.
note that, if `columns` is also provided, it is required to
specify relationship property names in provided columns.
otherwise they won't be included even if `depth` is provided.
defaults to `default_depth` value of database config store.
be careful when increasing `depth`, it could crash the
application if set to higher values. choose it wisely.
normally the maximum acceptable `depth` would be 2 or 3.
there is a hard limit for max valid `depth` which is set
in `ConverterMixin.MAX_DEPTH` class variable. providing higher
`depth` value than this limit, will cause an error.
it will be used only for entity conversion.
this value will override the corresponding value of
`result_schema` if provided.
:keyword bool no_cache: a value indicating that the response returning from this route
must have a `Cache-Control: no-cache` header. this header will
be automatically added. defaults to False if not provided.
:keyword int request_limit: number of allowed requests to this
route before it unregisters itself.
defaults to None if not provided.
:keyword int lifetime: number of seconds in which this route must remain
responsive after initial registration. after this
period, the route will unregister itself.
defaults to None if not provided.
:note request_limit, lifetime: if both of these values are provided, this
route will unregister itself if any of
these two conditions are met.
:keyword bool paged: specifies that this route should return paginated results.
defaults to False if not provided.
:keyword int page_size: default page size for this route.
defaults to `default_page_size` from
`database` config store if not provided.
:keyword int max_page_size: maximum page size that client is allowed
to request for this route. defaults to
`max_page_size` from `database` configs store
if not provided.
:keyword bool cors_enabled: specifies that cross origin resource sharing is enabled.
if not provided, it will be taken from the cors config store.
:keyword bool cors_always_send: specifies that cors headers must be included in the
response even if the request does not have an origin header.
if not provided, it will be taken from the cors config store.
:keyword list[str] cors_allowed_origins: a list of extra allowed origins to be used
in conjunction with the default allowed ones.
:keyword list[str] cors_exposed_headers: extra exposed headers to be combined
with the default ones.
:keyword list[str] cors_allowed_headers: extra allowed headers to be combined
with the default ones.
:keyword bool cors_allow_credentials: specifies that browsers are allowed to pass
response headers to front-end javascript code
if the route is authenticated.
if not provided, it will be taken from the cors config store.
:keyword int cors_max_age: maximum number of seconds to cache results.
if not provided, it will be taken from the cors config store.
:keyword bool swagger: specifies that this route must be exposed on swagger.
defaults to True if not provided.
:keyword bool ordered: specifies that this route provides ordered results.
this is a flag to be used by swagger package to add
`order_by` keyword into parameters.
defaults to False if not provided.
:keyword bool provide_automatic_options: controls whether the `OPTIONS` method should be
added automatically.
this can also be controlled by setting the
`view_func.provide_automatic_options = False`
before adding the rule.
:keyword dict defaults: an optional dict with defaults for other rules with the
same endpoint. this is a bit tricky but useful if you
want to have unique urls.
:keyword str subdomain: the subdomain rule string for this rule. if not specified, the
rule only matches for the `default_subdomain` of the map. if
the map is not bound to a subdomain this feature is disabled.
:keyword bool strict_slashes: override the `Map` setting for `strict_slashes` only for
this rule. if not specified the `Map` setting is used.
:keyword bool merge_slashes: override `Map.merge_slashes` for this rule.
:keyword bool build_only: set this to True and the rule will never match but will
create a url that can be built. this is useful if you have
resources on a subdomain or folder that are not handled by
the WSGI application (like static data).
:keyword str | callable redirect_to: if given this must be either a string
or callable. in case of a callable it's
called with the url adapter that
triggered the match and the values
of the url as keyword arguments and has
to return the target for the redirect,
otherwise it has to be a string with
placeholders in rule syntax.
:keyword bool alias: if enabled this rule serves as an alias for another rule with
the same endpoint and arguments.
:keyword str host: if provided and the url map has host matching enabled this can be
used to provide a match rule for the whole host. this also means
that the subdomain feature is disabled.
:keyword bool websocket: if set to True, this rule only matches
websocket (`ws://`, `wss://`) requests. by default,
rules only match http requests.
defaults to False if not provided.
:raises DuplicateRouteURLError: duplicate route url error.
:raises OverwritingEndpointIsNotAllowedError: overwriting endpoint is not allowed error.
:raises PageSizeLimitError: page size limit error.
:raises MaxContentLengthLimitMismatchError: max content length limit mismatch error.
:raises InvalidViewFunctionTypeError: invalid view function type error.
:raises InvalidResultSchemaTypeError: invalid result schema type error.
:raises InvalidResponseStatusCodeError: invalid response status code error.
:rtype: function
"""
    def decorator(func):
        """
        decorates the given function and registers it as an api handler.

        :param function func: the function to register as an api handler.

        :rtype: function
        """

        router_services.add_route(url, view_func=func,
                                  methods=HTTPMethodEnum.DELETE,
                                  authenticated=authenticated,
                                  permissions=permissions, **options)
        return func

    return decorator

# File: Packages/API/api_keys.py (repo: bnonni/Python, license: Apache-2.0)
api_key = "c83ad52bba836f8b33305e95f3e09da5"
# File: test/test_classifier.py (repo: pfnet-research/autogbt-alt, license: MIT)
from sklearn.utils.estimator_checks import check_estimator
from autogbt.classifier import AutoGBTClassifier


def test_check_estimator():
    check_estimator(AutoGBTClassifier)
4077d2e121f70f92a1b4900bb21d20cc3c5a2ba2 | 5,409 | py | Python | Chapter10_DeepQNetworks/FrozenLake/plotting.py | franneck94/UdemyAI | bb3decc35ec626a09edf0abdbfbe7c36dac6179a | [
"MIT"
] | 2 | 2021-02-10T19:50:27.000Z | 2021-12-30T06:15:55.000Z | Chapter10_DeepQNetworks/FrozenLake/plotting.py | franneck94/UdemyAI | bb3decc35ec626a09edf0abdbfbe7c36dac6179a | [
"MIT"
] | 1 | 2020-12-21T15:29:20.000Z | 2022-01-15T12:06:09.000Z | Chapter10_DeepQNetworks/FrozenLake/plotting.py | franneck94/UdemyAI | bb3decc35ec626a09edf0abdbfbe7c36dac6179a | [
"MIT"
] | 4 | 2020-11-08T17:07:53.000Z | 2022-02-07T06:40:55.000Z | import matplotlib.pyplot as plt
import numpy as np
def plotting_fn(s, ax):
mat = np.full((4, 4), 1)
mat[1][3] = 0
mat[2][3] = 0
mat[3][0] = 0
mat[1][1] = 0
mat[3][3] = 2
posx = s // 4
posy = s % 4
mat[posx][posy] = 3
ax.cla()
ax.imshow(mat, cmap="Set1")
ax.text(posy, posx, "Agent", ha="center", va="center")
ax.text(3, 1, "Hole", ha="center", va="center")
ax.text(3, 2, "Hole", ha="center", va="center")
ax.text(0, 3, "Hole", ha="center", va="center")
ax.text(1, 1, "Hole", ha="center", va="center")
ax.text(3, 3, "Goal", ha="center", va="center")
plt.pause(0.3)
def save_map(values, name="test.png"):
fig, ax = plt.subplots(figsize=(10, 10))
mat = np.full((4, 4), 1)
mat[1][3] = 0
mat[2][3] = 0
mat[3][0] = 0
mat[1][1] = 0
mat[3][3] = 2
ax.cla()
ax.imshow(mat, cmap="Set1")
ax.text(3, 1, "Hole", ha="center", va="center")
ax.text(3, 2, "Hole", ha="center", va="center")
ax.text(0, 3, "Hole", ha="center", va="center")
ax.text(1, 1, "Hole", ha="center", va="center")
ax.text(3, 3, "Goal", ha="center", va="center")
    for s in range(len(values)):
        posx = s // 4
        posy = s % 4
        max_index = np.argmax(values[s])
        # (dx, dy, label, ha, va) for each action: Left, Down, Right, Up
        directions = [
            (-0.2, 0.0, "L", "right", "center"),
            (0.0, 0.2, "D", "center", "top"),
            (0.2, 0.0, "R", "left", "center"),
            (0.0, -0.2, "U", "center", "bottom"),
        ]
        for a, (dx, dy, label, ha, va) in enumerate(directions):
            weight = "bold" if a == max_index else "normal"
            ax.text(
                posy + dx,
                posx + dy,
                label + ": " + str(round(values[s][a], 3)),
                weight=weight,
                ha=ha,
                va=va,
            )
fig.savefig("./" + name)
plt.close()
def plotting_q_values(state, action, values, ax):
mat = np.full((4, 4), 1)
mat[1][3] = 0
mat[2][3] = 0
mat[3][0] = 0
mat[1][1] = 0
mat[3][3] = 2
posx = state // 4
posy = state % 4
mat[posx][posy] = 3
ax.cla()
ax.imshow(mat, cmap="Set1")
ax.text(posy, posx, "Agent", ha="center", va="center")
ax.text(3, 1, "Hole", ha="center", va="center")
ax.text(3, 2, "Hole", ha="center", va="center")
ax.text(0, 3, "Hole", ha="center", va="center")
ax.text(1, 1, "Hole", ha="center", va="center")
ax.text(3, 3, "Goal", ha="center", va="center")
    for s in range(len(values)):
        posx = s // 4
        posy = s % 4
        max_index = np.argmax(values[s])
        # (dx, dy, label, ha, va) for each action: Left, Down, Right, Up
        directions = [
            (-0.2, 0.0, "L", "right", "center"),
            (0.0, 0.2, "D", "center", "top"),
            (0.2, 0.0, "R", "left", "center"),
            (0.0, -0.2, "U", "center", "bottom"),
        ]
        for a, (dx, dy, label, ha, va) in enumerate(directions):
            weight = "bold" if a == max_index else "normal"
            color = "red" if action == a and state == s else "black"
            ax.text(
                posy + dx,
                posx + dy,
                label + ": " + str(round(values[s][a], 3)),
                weight=weight,
                ha=ha,
                va=va,
                color=color,
            )
plt.pause(2.0)
| 32.781818 | 72 | 0.385838 | 667 | 5,409 | 3.107946 | 0.116942 | 0.072359 | 0.101302 | 0.131211 | 0.892909 | 0.892909 | 0.879884 | 0.879884 | 0.879884 | 0.868307 | 0 | 0.046221 | 0.452024 | 5,409 | 164 | 73 | 32.981707 | 0.653171 | 0.00684 | 0 | 0.870968 | 0 | 0 | 0.096215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019355 | false | 0 | 0.012903 | 0 | 0.032258 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
40832dc25c576801f5c541980d8cee3cb73e9792 | 32,919 | py | Python | nebula2/storage/GeneralStorageService.py | Shylock-Hg/nebula-python | f17120b77adb6dd00aeb52de1abf783fcb9b4465 | [
"Apache-2.0"
] | null | null | null | nebula2/storage/GeneralStorageService.py | Shylock-Hg/nebula-python | f17120b77adb6dd00aeb52de1abf783fcb9b4465 | [
"Apache-2.0"
] | null | null | null | nebula2/storage/GeneralStorageService.py | Shylock-Hg/nebula-python | f17120b77adb6dd00aeb52de1abf783fcb9b4465 | [
"Apache-2.0"
] | null | null | null | #
# Autogenerated by Thrift
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
# @generated
#
from __future__ import absolute_import
import six
import sys
from nebula2.fbthrift.util.Recursive import fix_spec
from nebula2.fbthrift.Thrift import TType, TMessageType, TPriority, TRequestContext, TProcessorEventHandler, TServerInterface, TProcessor, TException, TApplicationException, UnimplementedTypedef
from nebula2.fbthrift.protocol.TProtocol import TProtocolException
from .ttypes import UTF8STRINGS, StatType, OrderDirection, EdgeDirection, ScanType, EngineSignType, RequestCommon, PartitionResult, ResponseCommon, StatProp, Expr, EdgeProp, VertexProp, OrderBy, TraverseSpec, GetNeighborsRequest, GetNeighborsResponse, ExecResponse, GetPropRequest, GetPropResponse, NewTag, NewVertex, EdgeKey, NewEdge, AddVerticesRequest, AddEdgesRequest, DeleteVerticesRequest, DeleteEdgesRequest, DelTags, DeleteTagsRequest, UpdateResponse, UpdatedProp, UpdateVertexRequest, UpdateEdgeRequest, GetUUIDReq, GetUUIDResp, LookupIndexResp, IndexColumnHint, IndexQueryContext, IndexSpec, LookupIndexRequest, LookupAndTraverseRequest, ScanCursor, ScanVertexRequest, ScanVertexResponse, ScanEdgeRequest, ScanEdgeResponse, TaskPara, AddAdminTaskRequest, StopAdminTaskRequest, AdminExecResp, TransLeaderReq, AddPartReq, AddLearnerReq, RemovePartReq, MemberChangeReq, CatchUpDataReq, GetLeaderReq, CreateCPRequest, DropCPRequest, BlockingSignRequest, GetLeaderPartsResp, CheckPeersReq, RebuildIndexRequest, CreateCPResp, ListClusterInfoResp, ListClusterInfoReq, KVGetRequest, KVGetResponse, KVPutRequest, KVRemoveRequest, InternalTxnRequest, ChainAddEdgesRequest, ChainUpdateEdgeRequest
import nebula2.common.ttypes
import nebula2.meta.ttypes
from nebula2.fbthrift.Thrift import TProcessor
import pprint
import warnings
from nebula2.fbthrift import Thrift
from nebula2.fbthrift.transport import TTransport
from nebula2.fbthrift.protocol import TBinaryProtocol
from nebula2.fbthrift.protocol import TCompactProtocol
from nebula2.fbthrift.protocol import THeaderProtocol
fastproto = None
try:
from nebula2.fbthrift.protocol import fastproto
except ImportError:
pass
all_structs = []
UTF8STRINGS = bool(0) or sys.version_info.major >= 3
from nebula2.fbthrift.util.Decorators import (
future_process_main,
future_process_method,
process_main as thrift_process_main,
process_method as thrift_process_method,
should_run_on_thread,
write_results_after_future,
)
class Iface:
def get(self, req=None):
"""
Parameters:
- req
"""
pass
def put(self, req=None):
"""
Parameters:
- req
"""
pass
def remove(self, req=None):
"""
Parameters:
- req
"""
pass
class ContextIface:
def get(self, handler_ctx, req=None):
"""
Parameters:
- req
"""
pass
def put(self, handler_ctx, req=None):
"""
Parameters:
- req
"""
pass
def remove(self, handler_ctx, req=None):
"""
Parameters:
- req
"""
pass
# HELPER FUNCTIONS AND STRUCTURES
class get_args:
"""
Attributes:
- req
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.req = KVGetRequest()
self.req.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('get_args')
if self.req != None:
oprot.writeFieldBegin('req', TType.STRUCT, 1)
self.req.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.req is not None:
value = pprint.pformat(self.req, indent=0)
value = padding.join(value.splitlines(True))
L.append(' req=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(get_args)
get_args.thrift_spec = (
None, # 0
(1, TType.STRUCT, 'req', [KVGetRequest, KVGetRequest.thrift_spec, False], None, 2, ), # 1
)
get_args.thrift_struct_annotations = {
}
get_args.thrift_field_annotations = {
}
def get_args__init__(self, req=None,):
self.req = req
get_args.__init__ = get_args__init__
def get_args__setstate__(self, state):
state.setdefault('req', None)
self.__dict__ = state
get_args.__getstate__ = lambda self: self.__dict__.copy()
get_args.__setstate__ = get_args__setstate__
class get_result:
"""
Attributes:
- success
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = KVGetResponse()
self.success.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('get_result')
if self.success != None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.success is not None:
value = pprint.pformat(self.success, indent=0)
value = padding.join(value.splitlines(True))
L.append(' success=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(get_result)
get_result.thrift_spec = (
(0, TType.STRUCT, 'success', [KVGetResponse, KVGetResponse.thrift_spec, False], None, 2, ), # 0
)
get_result.thrift_struct_annotations = {
}
get_result.thrift_field_annotations = {
}
def get_result__init__(self, success=None,):
self.success = success
get_result.__init__ = get_result__init__
def get_result__setstate__(self, state):
state.setdefault('success', None)
self.__dict__ = state
get_result.__getstate__ = lambda self: self.__dict__.copy()
get_result.__setstate__ = get_result__setstate__
class put_args:
"""
Attributes:
- req
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.req = KVPutRequest()
self.req.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('put_args')
if self.req != None:
oprot.writeFieldBegin('req', TType.STRUCT, 1)
self.req.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.req is not None:
value = pprint.pformat(self.req, indent=0)
value = padding.join(value.splitlines(True))
L.append(' req=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(put_args)
put_args.thrift_spec = (
None, # 0
(1, TType.STRUCT, 'req', [KVPutRequest, KVPutRequest.thrift_spec, False], None, 2, ), # 1
)
put_args.thrift_struct_annotations = {
}
put_args.thrift_field_annotations = {
}
def put_args__init__(self, req=None,):
self.req = req
put_args.__init__ = put_args__init__
def put_args__setstate__(self, state):
state.setdefault('req', None)
self.__dict__ = state
put_args.__getstate__ = lambda self: self.__dict__.copy()
put_args.__setstate__ = put_args__setstate__
class put_result:
"""
Attributes:
- success
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = ExecResponse()
self.success.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('put_result')
if self.success != None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.success is not None:
value = pprint.pformat(self.success, indent=0)
value = padding.join(value.splitlines(True))
L.append(' success=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(put_result)
put_result.thrift_spec = (
(0, TType.STRUCT, 'success', [ExecResponse, ExecResponse.thrift_spec, False], None, 2, ), # 0
)
put_result.thrift_struct_annotations = {
}
put_result.thrift_field_annotations = {
}
def put_result__init__(self, success=None,):
self.success = success
put_result.__init__ = put_result__init__
def put_result__setstate__(self, state):
state.setdefault('success', None)
self.__dict__ = state
put_result.__getstate__ = lambda self: self.__dict__.copy()
put_result.__setstate__ = put_result__setstate__
class remove_args:
"""
Attributes:
- req
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.req = KVRemoveRequest()
self.req.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('remove_args')
if self.req != None:
oprot.writeFieldBegin('req', TType.STRUCT, 1)
self.req.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.req is not None:
value = pprint.pformat(self.req, indent=0)
value = padding.join(value.splitlines(True))
L.append(' req=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(remove_args)
remove_args.thrift_spec = (
None, # 0
(1, TType.STRUCT, 'req', [KVRemoveRequest, KVRemoveRequest.thrift_spec, False], None, 2, ), # 1
)
remove_args.thrift_struct_annotations = {
}
remove_args.thrift_field_annotations = {
}
def remove_args__init__(self, req=None,):
self.req = req
remove_args.__init__ = remove_args__init__
def remove_args__setstate__(self, state):
state.setdefault('req', None)
self.__dict__ = state
remove_args.__getstate__ = lambda self: self.__dict__.copy()
remove_args.__setstate__ = remove_args__setstate__
class remove_result:
"""
Attributes:
- success
"""
thrift_spec = None
thrift_field_annotations = None
thrift_struct_annotations = None
__init__ = None
@staticmethod
def isUnion():
return False
def read(self, iprot):
if (isinstance(iprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0)
return
if (isinstance(iprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(iprot, THeaderProtocol.THeaderProtocolAccelerate) and iprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastproto is not None:
fastproto.decode(self, iprot.trans, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2)
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = ExecResponse()
self.success.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if (isinstance(oprot, TBinaryProtocol.TBinaryProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_BINARY_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=0))
return
if (isinstance(oprot, TCompactProtocol.TCompactProtocolAccelerated) or (isinstance(oprot, THeaderProtocol.THeaderProtocolAccelerate) and oprot.get_protocol_id() == THeaderProtocol.THeaderProtocol.T_COMPACT_PROTOCOL)) and self.thrift_spec is not None and fastproto is not None:
oprot.trans.write(fastproto.encode(self, [self.__class__, self.thrift_spec, False], utf8strings=UTF8STRINGS, protoid=2))
return
oprot.writeStructBegin('remove_result')
if self.success != None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def __repr__(self):
L = []
padding = ' ' * 4
if self.success is not None:
value = pprint.pformat(self.success, indent=0)
value = padding.join(value.splitlines(True))
L.append(' success=%s' % (value))
return "%s(%s)" % (self.__class__.__name__, "\n" + ",\n".join(L) if L else '')
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
# Override the __hash__ function for Python3 - t10434117
if not six.PY2:
__hash__ = object.__hash__
all_structs.append(remove_result)
remove_result.thrift_spec = (
(0, TType.STRUCT, 'success', [ExecResponse, ExecResponse.thrift_spec, False], None, 2, ), # 0
)
remove_result.thrift_struct_annotations = {
}
remove_result.thrift_field_annotations = {
}
def remove_result__init__(self, success=None,):
self.success = success
remove_result.__init__ = remove_result__init__
def remove_result__setstate__(self, state):
state.setdefault('success', None)
self.__dict__ = state
remove_result.__getstate__ = lambda self: self.__dict__.copy()
remove_result.__setstate__ = remove_result__setstate__
class Client(Iface):
def __enter__(self):
return self
def __exit__(self, type, value, tb):
self._iprot.trans.close()
if self._iprot is not self._oprot:
self._oprot.trans.close()
def __init__(self, iprot, oprot=None):
self._iprot = self._oprot = iprot
if oprot != None:
self._oprot = oprot
self._seqid = 0
def get(self, req=None):
"""
Parameters:
- req
"""
self.send_get(req)
return self.recv_get()
def send_get(self, req=None):
self._oprot.writeMessageBegin('get', TMessageType.CALL, self._seqid)
args = get_args()
args.req = req
args.write(self._oprot)
self._oprot.writeMessageEnd()
self._oprot.trans.flush()
def recv_get(self, ):
(fname, mtype, rseqid) = self._iprot.readMessageBegin()
    if mtype == TMessageType.EXCEPTION:
      x = TApplicationException()
      x.read(self._iprot)
      self._iprot.readMessageEnd()
      raise x
    result = get_result()
    result.read(self._iprot)
    self._iprot.readMessageEnd()
    if result.success is not None:
      return result.success
    raise TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result")

  def put(self, req=None):
    """
    Parameters:
     - req
    """
    self.send_put(req)
    return self.recv_put()

  def send_put(self, req=None):
    self._oprot.writeMessageBegin('put', TMessageType.CALL, self._seqid)
    args = put_args()
    args.req = req
    args.write(self._oprot)
    self._oprot.writeMessageEnd()
    self._oprot.trans.flush()

  def recv_put(self):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
    if mtype == TMessageType.EXCEPTION:
      x = TApplicationException()
      x.read(self._iprot)
      self._iprot.readMessageEnd()
      raise x
    result = put_result()
    result.read(self._iprot)
    self._iprot.readMessageEnd()
    if result.success is not None:
      return result.success
    raise TApplicationException(TApplicationException.MISSING_RESULT, "put failed: unknown result")

  def remove(self, req=None):
    """
    Parameters:
     - req
    """
    self.send_remove(req)
    return self.recv_remove()

  def send_remove(self, req=None):
    self._oprot.writeMessageBegin('remove', TMessageType.CALL, self._seqid)
    args = remove_args()
    args.req = req
    args.write(self._oprot)
    self._oprot.writeMessageEnd()
    self._oprot.trans.flush()

  def recv_remove(self):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
    if mtype == TMessageType.EXCEPTION:
      x = TApplicationException()
      x.read(self._iprot)
      self._iprot.readMessageEnd()
      raise x
    result = remove_result()
    result.read(self._iprot)
    self._iprot.readMessageEnd()
    if result.success is not None:
      return result.success
    raise TApplicationException(TApplicationException.MISSING_RESULT, "remove failed: unknown result")


class Processor(Iface, TProcessor):
  _onewayMethods = ()

  def __init__(self, handler):
    TProcessor.__init__(self)
    self._handler = handler
    self._processMap = {}
    self._priorityMap = {}
    self._processMap["get"] = Processor.process_get
    self._priorityMap["get"] = TPriority.NORMAL
    self._processMap["put"] = Processor.process_put
    self._priorityMap["put"] = TPriority.NORMAL
    self._processMap["remove"] = Processor.process_remove
    self._priorityMap["remove"] = TPriority.NORMAL

  def onewayMethods(self):
    l = []
    l.extend(Processor._onewayMethods)
    return tuple(l)

  @thrift_process_main()
  def process(self): pass

  @thrift_process_method(get_args, oneway=False)
  def process_get(self, args, handler_ctx):
    result = get_result()
    try:
      result.success = self._handler.get(args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'get', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result

  @thrift_process_method(put_args, oneway=False)
  def process_put(self, args, handler_ctx):
    result = put_result()
    try:
      result.success = self._handler.put(args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'put', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result

  @thrift_process_method(remove_args, oneway=False)
  def process_remove(self, args, handler_ctx):
    result = remove_result()
    try:
      result.success = self._handler.remove(args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'remove', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result


Iface._processor_type = Processor


class ContextProcessor(ContextIface, TProcessor):
  _onewayMethods = ()

  def __init__(self, handler):
    TProcessor.__init__(self)
    self._handler = handler
    self._processMap = {}
    self._priorityMap = {}
    self._processMap["get"] = ContextProcessor.process_get
    self._priorityMap["get"] = TPriority.NORMAL
    self._processMap["put"] = ContextProcessor.process_put
    self._priorityMap["put"] = TPriority.NORMAL
    self._processMap["remove"] = ContextProcessor.process_remove
    self._priorityMap["remove"] = TPriority.NORMAL

  def onewayMethods(self):
    l = []
    l.extend(ContextProcessor._onewayMethods)
    return tuple(l)

  @thrift_process_main()
  def process(self): pass

  @thrift_process_method(get_args, oneway=False)
  def process_get(self, args, handler_ctx):
    result = get_result()
    try:
      result.success = self._handler.get(handler_ctx, args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'get', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result

  @thrift_process_method(put_args, oneway=False)
  def process_put(self, args, handler_ctx):
    result = put_result()
    try:
      result.success = self._handler.put(handler_ctx, args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'put', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result

  @thrift_process_method(remove_args, oneway=False)
  def process_remove(self, args, handler_ctx):
    result = remove_result()
    try:
      result.success = self._handler.remove(handler_ctx, args.req)
    except:
      ex = sys.exc_info()[1]
      self._event_handler.handlerError(handler_ctx, 'remove', ex)
      result = Thrift.TApplicationException(message=repr(ex))
    return result


ContextIface._processor_type = ContextProcessor

fix_spec(all_structs)
del all_structs
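The generated `Processor` classes above all follow the same pattern: a `_processMap` from method name to process function, a parallel priority map, and an error path that wraps any handler exception in an application-level error result. A minimal sketch of that name-based dispatch pattern, independent of Thrift (all names here, `Handler` and `Dispatcher`, are illustrative, not fbthrift API):

```python
class Handler:
    """Toy key-value handler standing in for the user-provided Iface impl."""

    def __init__(self):
        self._store = {}

    def get(self, req):
        return self._store[req]

    def put(self, req):
        key, value = req
        self._store[key] = value
        return True

    def remove(self, req):
        return self._store.pop(req, None) is not None


class Dispatcher:
    def __init__(self, handler):
        # Mirrors Processor._processMap: method name -> callable.
        self._process_map = {
            "get": handler.get,
            "put": handler.put,
            "remove": handler.remove,
        }

    def process(self, method, req):
        try:
            return ("success", self._process_map[method](req))
        except Exception as ex:
            # Mirrors the generated error path that wraps the exception
            # instead of letting it propagate to the transport layer.
            return ("error", repr(ex))


d = Dispatcher(Handler())
assert d.process("put", ("k", 1)) == ("success", True)
assert d.process("get", "k") == ("success", 1)
assert d.process("remove", "k") == ("success", True)
assert d.process("get", "k")[0] == "error"  # KeyError -> wrapped as an error result
```

An unknown method name takes the same error path as a failing handler, which is roughly how the real processor reports unrecognized calls back to the client.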
| 36.658129 | 1,195 | 0.725022 | 3,829 | 32,919 | 5.94176 | 0.074954 | 0.02901 | 0.021362 | 0.029537 | 0.846292 | 0.82392 | 0.815173 | 0.796888 | 0.782207 | 0.77412 | 0 | 0.007344 | 0.168535 | 32,919 | 897 | 1,196 | 36.698997 | 0.823865 | 0.023634 | 0 | 0.738372 | 1 | 0 | 0.014403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113372 | false | 0.013081 | 0.02907 | 0.018895 | 0.296512 | 0.010174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
40997f044d20b8ac259e05743f402b1bff691f5e | 1,819 | py | Python | core/migrations/0024_auto_20210606_1702.py | tanyutao544/digitalace-backend | 3607b1325856eafa4e1c96d6189f7aed1b163a19 | [
"MIT"
] | 1 | 2021-05-28T05:22:54.000Z | 2021-05-28T05:22:54.000Z | core/migrations/0024_auto_20210606_1702.py | tanyutao544/digitalace-backend | 3607b1325856eafa4e1c96d6189f7aed1b163a19 | [
"MIT"
] | 3 | 2021-05-31T15:44:14.000Z | 2021-06-29T07:48:13.000Z | core/migrations/0024_auto_20210606_1702.py | tanyutao544/digitalace-backend | 3607b1325856eafa4e1c96d6189f7aed1b163a19 | [
"MIT"
] | 1 | 2021-05-30T07:42:54.000Z | 2021-05-30T07:42:54.000Z | # Generated by Django 3.2.3 on 2021-06-06 17:02
from decimal import Decimal

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0023_auto_20210528_0911'),
    ]

    operations = [
        migrations.AddField(
            model_name='customer',
            name='email',
            field=models.EmailField(default='default@digitalace.com', max_length=255, unique=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='customer',
            name='first_seen',
            field=models.DateField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='customer',
            name='last_seen',
            field=models.DateField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='customer',
            name='receivables',
            field=models.DecimalField(blank=True, decimal_places=2, default=Decimal('0.00'), max_digits=10),
        ),
        migrations.AddField(
            model_name='supplier',
            name='email',
            field=models.EmailField(default='default@digitalace.com', max_length=255, unique=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='supplier',
            name='first_seen',
            field=models.DateField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='supplier',
            name='last_seen',
            field=models.DateField(blank=True, null=True),
        ),
        migrations.AddField(
            model_name='supplier',
            name='payables',
            field=models.DecimalField(blank=True, decimal_places=2, default=Decimal('0.00'), max_digits=10),
        ),
    ]
| 31.912281 | 108 | 0.574491 | 181 | 1,819 | 5.646409 | 0.314917 | 0.1409 | 0.180039 | 0.21135 | 0.81409 | 0.81409 | 0.729941 | 0.729941 | 0.729941 | 0.729941 | 0 | 0.038735 | 0.304563 | 1,819 | 56 | 109 | 32.482143 | 0.76917 | 0.024739 | 0 | 0.8 | 1 | 0 | 0.11851 | 0.03781 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
dc0fd722f15a563b6a4dce90ad33d3be37fee9cd | 12,128 | py | Python | vendor/github.com/containers-ai/api/alameda_api/v1alpha1/datahub/gpu/gpu_pb2.py | YungTsun/alameda | 518925fa6455ad2db812d6be08ed94741452c45c | [
"Apache-2.0"
] | null | null | null | vendor/github.com/containers-ai/api/alameda_api/v1alpha1/datahub/gpu/gpu_pb2.py | YungTsun/alameda | 518925fa6455ad2db812d6be08ed94741452c45c | [
"Apache-2.0"
] | null | null | null | vendor/github.com/containers-ai/api/alameda_api/v1alpha1/datahub/gpu/gpu_pb2.py | YungTsun/alameda | 518925fa6455ad2db812d6be08ed94741452c45c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: alameda_api/v1alpha1/datahub/gpu/gpu.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from alameda_api.v1alpha1.datahub.common import metrics_pb2 as alameda__api_dot_v1alpha1_dot_datahub_dot_common_dot_metrics__pb2
from alameda_api.v1alpha1.datahub.gpu import types_pb2 as alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2
from alameda_api.v1alpha1.datahub.predictions import types_pb2 as alameda__api_dot_v1alpha1_dot_datahub_dot_predictions_dot_types__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
  name='alameda_api/v1alpha1/datahub/gpu/gpu.proto',
  package='containersai.alameda.v1alpha1.datahub.gpu',
  syntax='proto3',
  serialized_options=_b('Z=github.com/containers-ai/api/alameda_api/v1alpha1/datahub/gpu'),
serialized_pb=_b('\n*alameda_api/v1alpha1/datahub/gpu/gpu.proto\x12)containersai.alameda.v1alpha1.datahub.gpu\x1a\x31\x61lameda_api/v1alpha1/datahub/common/metrics.proto\x1a,alameda_api/v1alpha1/datahub/gpu/types.proto\x1a\x34\x61lameda_api/v1alpha1/datahub/predictions/types.proto\"\xad\x01\n\x03Gpu\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04uuid\x18\x02 \x01(\t\x12H\n\x08metadata\x18\x03 \x01(\x0b\x32\x36.containersai.alameda.v1alpha1.datahub.gpu.GpuMetadata\x12@\n\x04spec\x18\x04 \x01(\x0b\x32\x32.containersai.alameda.v1alpha1.datahub.gpu.GpuSpec\"\xc0\x01\n\tGpuMetric\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04uuid\x18\x02 \x01(\t\x12H\n\x08metadata\x18\x03 \x01(\x0b\x32\x36.containersai.alameda.v1alpha1.datahub.gpu.GpuMetadata\x12M\n\x0bmetric_data\x18\x04 \x03(\x0b\x32\x38.containersai.alameda.v1alpha1.datahub.common.MetricData\"\x94\x03\n\rGpuPrediction\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04uuid\x18\x02 \x01(\t\x12H\n\x08metadata\x18\x03 \x01(\x0b\x32\x36.containersai.alameda.v1alpha1.datahub.gpu.GpuMetadata\x12Y\n\x12predicted_raw_data\x18\x04 \x03(\x0b\x32=.containersai.alameda.v1alpha1.datahub.predictions.MetricData\x12`\n\x19predicted_upperbound_data\x18\x05 \x03(\x0b\x32=.containersai.alameda.v1alpha1.datahub.predictions.MetricData\x12`\n\x19predicted_lowerbound_data\x18\x06 \x03(\x0b\x32=.containersai.alameda.v1alpha1.datahub.predictions.MetricDataB?Z=github.com/containers-ai/api/alameda_api/v1alpha1/datahub/gpub\x06proto3')
  ,
  dependencies=[alameda__api_dot_v1alpha1_dot_datahub_dot_common_dot_metrics__pb2.DESCRIPTOR,alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2.DESCRIPTOR,alameda__api_dot_v1alpha1_dot_datahub_dot_predictions_dot_types__pb2.DESCRIPTOR,])


_GPU = _descriptor.Descriptor(
  name='Gpu',
  full_name='containersai.alameda.v1alpha1.datahub.gpu.Gpu',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='name', full_name='containersai.alameda.v1alpha1.datahub.gpu.Gpu.name', index=0,
      number=1, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='uuid', full_name='containersai.alameda.v1alpha1.datahub.gpu.Gpu.uuid', index=1,
      number=2, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='metadata', full_name='containersai.alameda.v1alpha1.datahub.gpu.Gpu.metadata', index=2,
      number=3, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='spec', full_name='containersai.alameda.v1alpha1.datahub.gpu.Gpu.spec', index=3,
      number=4, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=241,
  serialized_end=414,
)

_GPUMETRIC = _descriptor.Descriptor(
  name='GpuMetric',
  full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuMetric',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='name', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuMetric.name', index=0,
      number=1, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='uuid', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuMetric.uuid', index=1,
      number=2, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='metadata', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuMetric.metadata', index=2,
      number=3, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='metric_data', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuMetric.metric_data', index=3,
      number=4, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=417,
  serialized_end=609,
)

_GPUPREDICTION = _descriptor.Descriptor(
  name='GpuPrediction',
  full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='name', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.name', index=0,
      number=1, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='uuid', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.uuid', index=1,
      number=2, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='metadata', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.metadata', index=2,
      number=3, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='predicted_raw_data', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.predicted_raw_data', index=3,
      number=4, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='predicted_upperbound_data', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.predicted_upperbound_data', index=4,
      number=5, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
    _descriptor.FieldDescriptor(
      name='predicted_lowerbound_data', full_name='containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction.predicted_lowerbound_data', index=5,
      number=6, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=612,
  serialized_end=1016,
)
_GPU.fields_by_name['metadata'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2._GPUMETADATA
_GPU.fields_by_name['spec'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2._GPUSPEC
_GPUMETRIC.fields_by_name['metadata'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2._GPUMETADATA
_GPUMETRIC.fields_by_name['metric_data'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_common_dot_metrics__pb2._METRICDATA
_GPUPREDICTION.fields_by_name['metadata'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_gpu_dot_types__pb2._GPUMETADATA
_GPUPREDICTION.fields_by_name['predicted_raw_data'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_predictions_dot_types__pb2._METRICDATA
_GPUPREDICTION.fields_by_name['predicted_upperbound_data'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_predictions_dot_types__pb2._METRICDATA
_GPUPREDICTION.fields_by_name['predicted_lowerbound_data'].message_type = alameda__api_dot_v1alpha1_dot_datahub_dot_predictions_dot_types__pb2._METRICDATA
DESCRIPTOR.message_types_by_name['Gpu'] = _GPU
DESCRIPTOR.message_types_by_name['GpuMetric'] = _GPUMETRIC
DESCRIPTOR.message_types_by_name['GpuPrediction'] = _GPUPREDICTION
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Gpu = _reflection.GeneratedProtocolMessageType('Gpu', (_message.Message,), {
  'DESCRIPTOR' : _GPU,
  '__module__' : 'alameda_api.v1alpha1.datahub.gpu.gpu_pb2'
  # @@protoc_insertion_point(class_scope:containersai.alameda.v1alpha1.datahub.gpu.Gpu)
  })
_sym_db.RegisterMessage(Gpu)
GpuMetric = _reflection.GeneratedProtocolMessageType('GpuMetric', (_message.Message,), {
  'DESCRIPTOR' : _GPUMETRIC,
  '__module__' : 'alameda_api.v1alpha1.datahub.gpu.gpu_pb2'
  # @@protoc_insertion_point(class_scope:containersai.alameda.v1alpha1.datahub.gpu.GpuMetric)
  })
_sym_db.RegisterMessage(GpuMetric)
GpuPrediction = _reflection.GeneratedProtocolMessageType('GpuPrediction', (_message.Message,), {
  'DESCRIPTOR' : _GPUPREDICTION,
  '__module__' : 'alameda_api.v1alpha1.datahub.gpu.gpu_pb2'
  # @@protoc_insertion_point(class_scope:containersai.alameda.v1alpha1.datahub.gpu.GpuPrediction)
  })
_sym_db.RegisterMessage(GpuPrediction)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 50.74477 | 1,479 | 0.778776 | 1,605 | 12,128 | 5.550779 | 0.10405 | 0.040409 | 0.070715 | 0.114491 | 0.810416 | 0.777304 | 0.762151 | 0.750028 | 0.705691 | 0.702885 | 0 | 0.042305 | 0.101501 | 12,128 | 238 | 1,480 | 50.957983 | 0.775259 | 0.038753 | 0 | 0.647619 | 1 | 0.004762 | 0.271783 | 0.241995 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038095 | 0 | 0.038095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
90aa47bf033bd2e514a36e324f9fd4e3f754d4d1 | 3,764 | py | Python | tests/test_keep_k.py | cxan96/oblivious-sketching-logreg | 6eb3500800ba7e31e113a83c9c84739227793b65 | [
"MIT"
] | 4 | 2021-06-09T15:32:18.000Z | 2022-02-03T19:40:21.000Z | tests/test_keep_k.py | cxan96/oblivious-sketching-logreg | 6eb3500800ba7e31e113a83c9c84739227793b65 | [
"MIT"
] | null | null | null | tests/test_keep_k.py | cxan96/oblivious-sketching-logreg | 6eb3500800ba7e31e113a83c9c84739227793b65 | [
"MIT"
] | null | null | null | import numpy as np
from sketching import optimizer


def test_keep_k_simple():
    test_vec = np.array([5, 3, -10, 6, 7, 2, 8, 6, 9])
    test_block_size = 3

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=1, biggest=True
    )
    assert np.allclose(
        results,
        np.array([5, 7, 9]),
    )
    assert np.allclose(indices, np.array([0, 4, 8]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=2, biggest=True
    )
    assert np.allclose(results, np.array([5, 3, 7, 6, 9, 8]))
    assert np.allclose(indices, np.array([0, 1, 4, 3, 8, 6]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=3, biggest=True
    )
    assert np.allclose(results, np.array([5, 3, -10, 6, 7, 2, 8, 6, 9]))
    assert np.allclose(indices, np.array([0, 1, 2, 3, 4, 5, 6, 7, 8]))

    # test keep smallest
    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=1, biggest=False
    )
    assert np.allclose(
        results,
        np.array([-10, 2, 6]),
    )
    assert np.allclose(indices, np.array([2, 5, 7]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=2, biggest=False
    )
    assert np.allclose(results, np.array([-10, 3, 2, 6, 6, 8]))
    assert np.allclose(indices, np.array([2, 1, 5, 3, 7, 6]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=3, biggest=False
    )
    assert np.allclose(results, np.array([5, 3, -10, 6, 7, 2, 8, 6, 9]))
    assert np.allclose(indices, np.array([0, 1, 2, 3, 4, 5, 6, 7, 8]))


def test_keep_k_do_not_touch():
    """
    Do not touch means: Don't touch the last elements of the vector.
    Here in the test: 1, 2, 3, 4, 5 are the last elements.
    """
    test_vec = np.array([5, 3, -10, 6, 7, 2, 8, 6, 9, 1, 2, 3, 4, 5])
    test_block_size = 3
    test_max_len = 9

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=1, max_len=test_max_len, biggest=True
    )
    assert np.allclose(
        results,
        np.array([5, 7, 9, 1, 2, 3, 4, 5]),
    )
    assert np.allclose(indices, np.array([0, 4, 8, 9, 10, 11, 12, 13]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=2, max_len=test_max_len, biggest=True
    )
    assert np.allclose(results, np.array([5, 3, 7, 6, 9, 8, 1, 2, 3, 4, 5]))
    assert np.allclose(indices, np.array([0, 1, 4, 3, 8, 6, 9, 10, 11, 12, 13]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=3, max_len=test_max_len, biggest=True
    )
    assert np.allclose(results, np.array([5, 3, -10, 6, 7, 2, 8, 6, 9, 1, 2, 3, 4, 5]))
    assert np.allclose(
        indices, np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
    )

    # test keep smallest
    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=1, max_len=test_max_len, biggest=False
    )
    assert np.allclose(
        results,
        np.array([-10, 2, 6, 1, 2, 3, 4, 5]),
    )
    assert np.allclose(indices, np.array([2, 5, 7, 9, 10, 11, 12, 13]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=2, max_len=test_max_len, biggest=False
    )
    assert np.allclose(results, np.array([-10, 3, 2, 6, 6, 8, 1, 2, 3, 4, 5]))
    assert np.allclose(indices, np.array([2, 1, 5, 3, 7, 6, 9, 10, 11, 12, 13]))

    results, indices = optimizer.only_keep_k(
        test_vec, test_block_size, k=3, max_len=test_max_len, biggest=False
    )
    assert np.allclose(results, np.array([5, 3, -10, 6, 7, 2, 8, 6, 9, 1, 2, 3, 4, 5]))
    assert np.allclose(
        indices, np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
    )
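`only_keep_k` itself lives in `sketching.optimizer` and is not shown here. A hypothetical reference implementation, inferred purely from the expected values in the tests above (the real function may differ): split `vec[:max_len]` into blocks of `block_size`, keep the `k` largest (or smallest) entries per block, ordered by magnitude, and pass everything beyond `max_len` through untouched.

```python
import numpy as np


def only_keep_k_sketch(vec, block_size, k, max_len=None, biggest=True):
    """Sketch inferred from the tests; `only_keep_k_sketch` is an illustrative
    name, not part of the sketching package."""
    if k >= block_size:  # nothing to drop in any block
        return vec.copy(), np.arange(len(vec))
    head_len = len(vec) if max_len is None else max_len
    values, indices = [], []
    for start in range(0, head_len, block_size):
        block = vec[start:start + block_size]
        order = np.argsort(block, kind="stable")  # ascending within the block
        if biggest:
            order = order[::-1]                   # descending instead
        keep = order[:k]
        values.append(block[keep])
        indices.append(start + keep)              # map back to global indices
    values.append(vec[head_len:])                 # "do not touch" tail
    indices.append(np.arange(head_len, len(vec)))
    return np.concatenate(values), np.concatenate(indices)


vec = np.array([5, 3, -10, 6, 7, 2, 8, 6, 9])
res, idx = only_keep_k_sketch(vec, 3, k=2, biggest=True)
assert np.array_equal(res, np.array([5, 3, 7, 6, 9, 8]))
assert np.array_equal(idx, np.array([0, 1, 4, 3, 8, 6]))
```

The sketch reproduces the expected outputs above, including the `max_len` pass-through case and the degenerate `k == block_size` case where the vector is returned unchanged.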
| 28.300752 | 87 | 0.591126 | 653 | 3,764 | 3.2634 | 0.084227 | 0.085406 | 0.180197 | 0.152041 | 0.90474 | 0.902393 | 0.902393 | 0.902393 | 0.902393 | 0.869545 | 0 | 0.099434 | 0.249203 | 3,764 | 132 | 88 | 28.515152 | 0.654636 | 0.041977 | 0 | 0.376471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282353 | 1 | 0.023529 | false | 0 | 0.023529 | 0 | 0.047059 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
90e82c2959a87b4c347e7943c09d2ecc4548f66f | 199 | py | Python | Workshops/25.11.2017 - Python. The Basics - Volume 1 -/Python Codes/5/3_super_wow.py | Act862/PyLam-Edu | c53114a56da8e5803495acc3c82da1b36ddd452b | [
"MIT"
] | null | null | null | Workshops/25.11.2017 - Python. The Basics - Volume 1 -/Python Codes/5/3_super_wow.py | Act862/PyLam-Edu | c53114a56da8e5803495acc3c82da1b36ddd452b | [
"MIT"
] | 3 | 2018-11-30T08:30:49.000Z | 2018-11-30T08:38:02.000Z | Workshops/25.11.2017 - Python. The Basics - Volume 1 -/Python Codes/5/3_super_wow.py | Act862/PyLam-Edu | c53114a56da8e5803495acc3c82da1b36ddd452b | [
"MIT"
] | 2 | 2018-07-01T14:11:47.000Z | 2018-11-04T01:06:55.000Z | i = 1
print("Value of i = ", i)

i += 5    # addition assignment: i = i + 5
print("Value of i = ", i)

i -= 3    # subtraction assignment: i = i - 3
print("Value of i = ", i)

i *= 2    # multiplication assignment: i = i * 2
print("Value of i = ", i)

i **= 2   # exponentiation assignment: i = i ** 2
print("Value of i = ", i)

i //= 2   # floor-division assignment: i = i // 2
print("Value of i = ", i)
| 15.307692 | 25 | 0.482412 | 42 | 199 | 2.285714 | 0.190476 | 0.229167 | 0.75 | 0.8125 | 0.958333 | 0.958333 | 0.645833 | 0.645833 | 0.645833 | 0.645833 | 0 | 0.041379 | 0.271357 | 199 | 12 | 26 | 16.583333 | 0.62069 | 0 | 0 | 0.5 | 0 | 0 | 0.39196 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
2944d89760f75e10b4196b8b0e956fc3064a181f | 6,051 | py | Python | test/dataset_transforms_mixup_test.py | propelwise/ClassyVision | 2d4a0aac3a969fd1219e83a6acafd57618fcbac4 | [
"MIT"
] | null | null | null | test/dataset_transforms_mixup_test.py | propelwise/ClassyVision | 2d4a0aac3a969fd1219e83a6acafd57618fcbac4 | [
"MIT"
] | null | null | null | test/dataset_transforms_mixup_test.py | propelwise/ClassyVision | 2d4a0aac3a969fd1219e83a6acafd57618fcbac4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import unittest

import torch
from classy_vision.dataset.transforms.mixup import MixupTransform


class DatasetTransformsMixupTest(unittest.TestCase):
    def test_mixup_transform_single_label_image_batch(self):
        mixup_alpha = 2.0
        num_classes = 3

        for mode in ["batch", "pair", "elem"]:
            mixup_transform = MixupTransform(mixup_alpha, num_classes, mode=mode)
            sample = {
                "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
                "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
            }
            sample_mixup = mixup_transform(sample)
            self.assertTrue(sample["input"].shape == sample_mixup["input"].shape)
            self.assertTrue(sample_mixup["target"].shape[0] == 4)
            self.assertTrue(sample_mixup["target"].shape[1] == 3)

    def test_cutmix_transform_single_label_image_batch(self):
        mixup_alpha = 0
        cutmix_alpha = 0.2
        num_classes = 3

        for mode in ["batch", "pair", "elem"]:
            for minmax in [None, (0.3, 0.7)]:
                cutmix_transform = MixupTransform(
                    mixup_alpha,
                    num_classes,
                    cutmix_alpha=cutmix_alpha,
                    mode=mode,
                    cutmix_minmax=minmax,
                )
                sample = {
                    "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
                    "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
                }
                sample_cutmix = cutmix_transform(sample)
                self.assertTrue(sample["input"].shape == sample_cutmix["input"].shape)
                self.assertTrue(sample_cutmix["target"].shape[0] == 4)
                self.assertTrue(sample_cutmix["target"].shape[1] == 3)

    def test_mixup_cutmix_transform_single_label_image_batch(self):
        mixup_alpha = 0.3
        cutmix_alpha = 0.2
        num_classes = 3

        for mode in ["batch", "pair", "elem"]:
            cutmix_transform = MixupTransform(
                mixup_alpha,
                num_classes,
                cutmix_alpha=cutmix_alpha,
                switch_prob=0.5,
                mode=mode,
            )
            for _i in range(4):
                sample = {
                    "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
                    "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
                }
                sample_cutmix = cutmix_transform(sample)
                self.assertTrue(sample["input"].shape == sample_cutmix["input"].shape)
                self.assertTrue(sample_cutmix["target"].shape[0] == 4)
                self.assertTrue(sample_cutmix["target"].shape[1] == 3)

    def test_mixup_cutmix_transform_single_label_image_batch_label_smooth(self):
        mixup_alpha = 0.3
        cutmix_alpha = 0.2
        num_classes = 3

        for mode in ["batch", "pair", "elem"]:
            cutmix_transform = MixupTransform(
                mixup_alpha,
                num_classes,
                cutmix_alpha=cutmix_alpha,
                switch_prob=0.5,
                mode=mode,
                label_smoothing=0.1,
            )
            for _i in range(4):
                sample = {
                    "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
                    "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
                }
                sample_cutmix = cutmix_transform(sample)
                self.assertTrue(sample["input"].shape == sample_cutmix["input"].shape)
                self.assertTrue(sample_cutmix["target"].shape[0] == 4)
                self.assertTrue(sample_cutmix["target"].shape[1] == 3)

    def test_mixup_transform_single_label_image_batch_missing_num_classes(self):
        mixup_alpha = 2.0

        mixup_transform = MixupTransform(mixup_alpha, None)
        sample = {
            "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
            "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
        }
        with self.assertRaises(Exception):
            mixup_transform(sample)

    def test_mixup_transform_multi_label_image_batch(self):
        mixup_alpha = 2.0

        mixup_transform = MixupTransform(mixup_alpha, None)
        sample = {
            "input": torch.rand(4, 3, 224, 224, dtype=torch.float32),
            "target": torch.as_tensor(
                [[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1]],
                dtype=torch.int32,
            ),
        }
        sample_mixup = mixup_transform(sample)
        self.assertTrue(sample["input"].shape == sample_mixup["input"].shape)
        self.assertTrue(sample["target"].shape == sample_mixup["target"].shape)

    def test_mixup_transform_single_label_multi_modal_batch(self):
        mixup_alpha = 2.0
        num_classes = 3

        mixup_transform = MixupTransform(mixup_alpha, num_classes)
        sample = {
            "input": {
                "video": torch.rand(4, 3, 4, 224, 224, dtype=torch.float32),
                "audio": torch.rand(4, 1, 40, 100, dtype=torch.float32),
            },
            "target": torch.as_tensor([0, 1, 2, 2], dtype=torch.int32),
        }
        mixup_transform(sample)

    def test_mixup_transform_multi_label_multi_modal_batch(self):
        mixup_alpha = 2.0

        mixup_transform = MixupTransform(mixup_alpha, None)
        sample = {
            "input": {
                "video": torch.rand(4, 3, 4, 224, 224, dtype=torch.float32),
                "audio": torch.rand(4, 1, 40, 100, dtype=torch.float32),
            },
            "target": torch.as_tensor(
                [[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1]],
                dtype=torch.int32,
            ),
        }
        mixup_transform(sample)
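The shape checks in these tests pin down the contract: inputs keep their shape, and integer targets become a `(batch, num_classes)` soft-label tensor. A minimal NumPy sketch of classic batch-mode mixup (Zhang et al., 2018) that satisfies that contract — illustrative only, not the `classy_vision` implementation; `mixup_batch` and its details are assumptions:

```python
import numpy as np


def mixup_batch(inputs, targets, num_classes, alpha, rng=None):
    # Draw a single mixing coefficient from Beta(alpha, alpha), blend each
    # sample with a flipped copy of the batch, and mix one-hot labels with
    # the same coefficient so they stay valid probability distributions.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    mixed_inputs = lam * inputs + (1.0 - lam) * inputs[::-1]
    one_hot = np.eye(num_classes)[targets]
    mixed_targets = lam * one_hot + (1.0 - lam) * one_hot[::-1]
    return mixed_inputs, mixed_targets


x = np.random.rand(4, 3, 8, 8)
y = np.array([0, 1, 2, 2])
mx, my = mixup_batch(x, y, num_classes=3, alpha=2.0)
assert mx.shape == x.shape               # input shape is preserved
assert my.shape == (4, 3)                # targets become (batch, num_classes)
assert np.allclose(my.sum(axis=1), 1.0)  # mixed labels still sum to one
```

The "pair" and "elem" modes presumably differ in how samples are paired and how many lambdas are drawn, but the target-shape contract the tests assert is the same in all three modes.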
| 39.54902 | 86 | 0.550488 | 698 | 6,051 | 4.568768 | 0.133238 | 0.056444 | 0.087802 | 0.082785 | 0.862339 | 0.862339 | 0.838194 | 0.797742 | 0.775165 | 0.708373 | 0 | 0.05587 | 0.328541 | 6,051 | 152 | 87 | 39.809211 | 0.729018 | 0.0314 | 0 | 0.728682 | 0 | 0 | 0.046107 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 1 | 0.062016 | false | 0 | 0.023256 | 0 | 0.093023 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
29732443930d45130c6cf4d33b1acc4875b582f9 | 13,140 | py | Python | databayes/tests/ohlcv/test_ohlcv.py | alphabayes/databayes | 310622e2ecc66fb2b046e7e539eeecd4d9a9f528 | [
"MIT"
] | null | null | null | databayes/tests/ohlcv/test_ohlcv.py | alphabayes/databayes | 310622e2ecc66fb2b046e7e539eeecd4d9a9f528 | [
"MIT"
] | null | null | null | databayes/tests/ohlcv/test_ohlcv.py | alphabayes/databayes | 310622e2ecc66fb2b046e7e539eeecd4d9a9f528 | [
"MIT"
] | null | null | null | import databayes.ohlcv as ohlcv
import yaml
import json
import numpy as np
import pandas as pd
import logging
import pytest
import os
import pkg_resources
import copy

installed_pkg = {pkg.key for pkg in pkg_resources.working_set}
if 'ipdb' in installed_pkg:
    import ipdb

logger = logging.getLogger()

# DATA_DIR = "data"

# Util function
# def ppcpt(c): print(gum_pp.cpt2txt(c))

DATA_PATH = os.path.join(os.path.dirname(__file__), "data")
EXPECTED_PATH = os.path.join(os.path.dirname(__file__), "expected")


@pytest.fixture(scope="module")
def data_btc_usdc_1m_100():
    data_filename = os.path.join(DATA_PATH, "data_btc_usdc_1m_100.csv")
    data_ohlcv_df = pd.read_csv(data_filename, sep=";",
                                index_col="timestamp",
                                parse_dates=["datetime"])
    return data_ohlcv_df


@pytest.fixture(scope="module")
def data_edf_1d_1mo():
    data_filename = os.path.join(DATA_PATH, "data_edf_1d_1mo.csv")
    data_ohlcv_df = pd.read_csv(data_filename, sep=",",
                                index_col="timestamp",
                                parse_dates=["timestamp"])
    return data_ohlcv_df


def test_ohlcv_add_data_001(data_btc_usdc_1m_100):
    data_ohlcv_df = data_btc_usdc_1m_100.copy()

    analyser_ref = ohlcv.ohlcvDataAnalyser()
    analyser_ref.add_ohlcv_data(data_ohlcv_df)

    data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
    data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]

    analyser_1 = ohlcv.ohlcvDataAnalyser()
    analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
    analyser_1.add_ohlcv_data(data_ohlcv_p2_df)

    pd.testing.assert_frame_equal(
        analyser_ref.ohlcv_df,
        analyser_1.ohlcv_df)


def test_ohlcv_target_001(data_btc_usdc_1m_100):
    data_ohlcv_df = data_btc_usdc_1m_100.copy()

    ohlcv_specs = yaml.load("""
    target_time_horizon: [1, 10]
    """, Loader=yaml.SafeLoader)

    analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
    analyser_ref.add_ohlcv_data(data_ohlcv_df)
    analyser_ref.compute_targets()

    # TODO: Check targets computations and test values

    # data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
    # data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]

    # analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
    # analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
    # analyser_1.compute_targets()
    # analyser_1.add_ohlcv_data(data_ohlcv_p2_df)
    # analyser_1.compute_targets()

    # pd.testing.assert_frame_equal(
    #     analyser_ref.ohlcv_df,
    #     analyser_1.ohlcv_df)

    # pd.testing.assert_frame_equal(
    #     analyser_ref.target_df,
    #     analyser_1.target_df)


def test_ohlcv_target_002(data_btc_usdc_1m_100):
    data_ohlcv_df = data_btc_usdc_1m_100.copy()

    ohlcv_specs = yaml.load("""
    target_time_horizon: [1, 10]
    """, Loader=yaml.SafeLoader)

    analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
    analyser_ref.add_ohlcv_data(data_ohlcv_df)
    analyser_ref.compute_targets()

    data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
    data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]

    analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
    analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
    analyser_1.compute_targets()
    analyser_1.add_ohlcv_data(data_ohlcv_p2_df)
    analyser_1.compute_targets()

    pd.testing.assert_frame_equal(
        analyser_ref.ohlcv_df,
        analyser_1.ohlcv_df)

    pd.testing.assert_frame_equal(
        analyser_ref.target_df,
        analyser_1.target_df)


def test_ohlcv_target_003(data_btc_usdc_1m_100):
    data_ohlcv_df = data_btc_usdc_1m_100.copy()

    ohlcv_specs = yaml.load("""
    target_time_horizon: [1, 10]
    """, Loader=yaml.SafeLoader)

    analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
    analyser_ref.add_ohlcv_data(data_ohlcv_df)
    analyser_ref.compute_targets()

    data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
    data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]

    analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
analyser_1.compute_targets()
analyser_1.add_ohlcv_data(data_ohlcv_p2_df)
analyser_1.update_targets()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.target_df,
analyser_1.target_df)
def test_ohlcv_target_004(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
target_time_horizon: [1, 10]
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_targets()
analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_1.add_ohlcv_data(data_ohlcv_df)
analyser_1.update_targets()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.target_df,
analyser_1.target_df)
def test_ohlcv_target_005(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
target_time_horizon: [1, 10]
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_targets()
analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
for ts in data_ohlcv_df.index:
analyser_1.add_ohlcv_data(data_ohlcv_df.loc[ts:ts])
analyser_1.update_targets()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.target_df,
analyser_1.target_df)
def test_ohlcv_perf_001(data_edf_1d_1mo):
data_ohlcv_df = data_edf_1d_1mo.copy()
ohlcv_specs = dict(
perf_time_horizon=[0, 1, 5],
control_regular_ts_delta=False,
ohlcv_names={"open": "Open",
"close": "Close",
"low": "Low",
"high": "High",
"volume": "Volume"},
)
analyser_ref = ohlcv.ohlcvDataAnalyser(**ohlcv_specs)
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_perf()
data_expected_filename = os.path.join(EXPECTED_PATH, "ohlcv_perf_001.csv")
    # Save expected data (once validated of course!)
    # analyser_ref.perf_df.to_csv(data_expected_filename, sep=",")
ohlcv_perf_expected_df = \
pd.read_csv(data_expected_filename,
sep=",",
index_col="timestamp",
parse_dates=["timestamp"])
pd.testing.assert_frame_equal(
analyser_ref.perf_df,
ohlcv_perf_expected_df)
analyser_1 = ohlcv.ohlcvDataAnalyser(**ohlcv_specs)
for ts in data_ohlcv_df.index:
analyser_1.add_ohlcv_data(data_ohlcv_df.loc[ts:ts])
analyser_1.update_perf()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.perf_df,
analyser_1.perf_df)
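The tests above repeatedly check that feeding data incrementally reproduces the batch computation. A minimal self-contained sketch of that pattern (using `pct_change` as a stand-in for the analyser's perf computation; all names here are illustrative, not part of databayes):

```python
import pandas as pd

def compute_perf(ohlcv_df):
    # stand-in "performance" computation: simple close-to-close returns
    return ohlcv_df["close"].pct_change().rename("perf").to_frame()

# batch: compute once on the full frame
data = pd.DataFrame({"close": [100.0, 101.0, 99.5, 102.0, 103.5]})
perf_ref = compute_perf(data)

# incremental: accumulate rows one at a time, recompute on the accumulated frame
acc = None
for i in data.index:
    chunk = data.loc[i:i]
    acc = chunk if acc is None else pd.concat([acc, chunk])
perf_inc = compute_perf(acc)

# both routes must agree, as in the tests above
pd.testing.assert_frame_equal(perf_ref, perf_inc)
```

This only holds when the computation depends on the accumulated history alone, which is the property the `update_*` methods are being tested for.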
def test_ohlcv_indic_001(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
indicators:
hammer_t:
cls: GeneralizedHammer
lag: 0
hammer_tm1:
cls: GeneralizedHammer
lag: 1
hammer_tm5:
cls: GeneralizedHammer
lag: 5
mvq:
cls: MovingVolumeQuantile
window_size: 10
range:
cls: RangeIndex
window_size: 20
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_indicators()
def test_ohlcv_indic_002(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
indicators:
hammer_t:
cls: GeneralizedHammer
lag: 0
hammer_tm1:
cls: GeneralizedHammer
lag: 1
hammer_tm5:
cls: GeneralizedHammer
lag: 5
mvq:
cls: MovingVolumeQuantile
window_size: 10
range:
cls: RangeIndex
window_size: 20
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_indicators()
data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]
analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
analyser_1.compute_indicators()
analyser_1.add_ohlcv_data(data_ohlcv_p2_df)
analyser_1.compute_indicators()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.indic_df,
analyser_1.indic_df)
def test_ohlcv_indic_003(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
indicators:
hammer_t:
cls: GeneralizedHammer
lag: 0
hammer_tm1:
cls: GeneralizedHammer
lag: 1
hammer_tm5:
cls: GeneralizedHammer
lag: 5
mvq:
cls: MovingVolumeQuantile
window_size: 10
range:
cls: RangeIndex
window_size: 20
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_indicators()
data_ohlcv_p1_df = data_ohlcv_df.iloc[:50]
data_ohlcv_p2_df = data_ohlcv_df.iloc[50:]
analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_1.add_ohlcv_data(data_ohlcv_p1_df)
analyser_1.update_indicators()
analyser_1.add_ohlcv_data(data_ohlcv_p2_df)
analyser_1.update_indicators()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.indic_df,
analyser_1.indic_df)
def test_ohlcv_indic_004(data_btc_usdc_1m_100):
data_ohlcv_df = data_btc_usdc_1m_100.copy()
ohlcv_specs = yaml.load("""
indicators:
hammer_t:
cls: GeneralizedHammer
lag: 0
hammer_tm1:
cls: GeneralizedHammer
lag: 1
hammer_tm5:
cls: GeneralizedHammer
lag: 5
mvq:
cls: MovingVolumeQuantile
window_size: 10
range:
cls: RangeIndex
window_size: 20
""", Loader=yaml.SafeLoader)
analyser_ref = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
analyser_ref.add_ohlcv_data(data_ohlcv_df)
analyser_ref.compute_indicators()
analyser_1 = ohlcv.ohlcvDataAnalyser(**copy.deepcopy(ohlcv_specs))
for ts in data_ohlcv_df.index:
analyser_1.add_ohlcv_data(data_ohlcv_df.loc[ts:ts])
analyser_1.update_indicators()
pd.testing.assert_frame_equal(
analyser_ref.ohlcv_df,
analyser_1.ohlcv_df)
pd.testing.assert_frame_equal(
analyser_ref.indic_df,
analyser_1.indic_df)
# data_train_df = data_df.iloc[:900]
# data_test_df = data_df.iloc[900:]
# model.fit(data_train_df)
# for var in model.model.variables.keys():
# var_cct_expected_filename = \
# os.path.join(EXPECTED_PATH,
# f"pure_bayesian_model_001_cct_{var}.csv")
# var_cct_df = model.model.get_cct(var, transpose=True)
# # Save expected data (once validated of course !)
# # var_cct_df.to_csv(var_cct_expected_filename,
# # index=len(var_cct_df.index) > 1)
# cct_expected_df = pd.read_csv(var_cct_expected_filename)
# if len(var_cct_df.index) > 1:
# cct_expected_df.set_index(var_cct_df.index.names,
# inplace=True)
# np.testing.assert_allclose(var_cct_df, cct_expected_df)
# # Ensure data_test_0 is directly a DataFrame.
# # Do not use intermediate Series to avoid dtypes problems
# data_test_0 = \
# data_test_df.loc[data_test_df.index[0:1], model.var_features]
# data_0_pred = model.predict(data_test_0)
# assert data_0_pred["ret_close_t2"]["scores"].values.tolist() == \
# [[0.493368700265252, 0.506631299734748]]
| 29.135255 | 79 | 0.654033 | 1,704 | 13,140 | 4.607981 | 0.103873 | 0.079088 | 0.063041 | 0.055018 | 0.838258 | 0.807565 | 0.798141 | 0.788462 | 0.760443 | 0.743505 | 0 | 0.035291 | 0.256012 | 13,140 | 450 | 80 | 29.2 | 0.767901 | 0.126941 | 0 | 0.799308 | 0 | 0 | 0.174836 | 0.002187 | 0 | 0 | 0 | 0.002222 | 0.062284 | 1 | 0.044983 | false | 0 | 0.038062 | 0 | 0.089965 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2982ff9eae02fa0f36c011520c424eb7547e8e42 | 1,870 | py | Python | tests/parser/integration/test_escrow.py | upgradvisor/vyper | 642884ea938a25793c1b2fac866e8458e63a7b49 | [
"Apache-2.0"
] | 1,471 | 2017-12-25T05:47:57.000Z | 2019-11-19T07:47:53.000Z | tests/parser/integration/test_escrow.py | upgradvisor/vyper | 642884ea938a25793c1b2fac866e8458e63a7b49 | [
"Apache-2.0"
] | 915 | 2019-11-21T05:48:16.000Z | 2022-03-31T23:51:03.000Z | tests/parser/integration/test_escrow.py | upgradvisor/vyper | 642884ea938a25793c1b2fac866e8458e63a7b49 | [
"Apache-2.0"
] | 321 | 2017-12-25T16:37:21.000Z | 2019-11-15T17:44:06.000Z | # from ethereum.tools import tester
def test_arbitration_code(w3, get_contract_with_gas_estimation, assert_tx_failed):
arbitration_code = """
buyer: address
seller: address
arbitrator: address
@external
def setup(_seller: address, _arbitrator: address):
if self.buyer == ZERO_ADDRESS:
self.buyer = msg.sender
self.seller = _seller
self.arbitrator = _arbitrator
@external
def finalize():
assert msg.sender == self.buyer or msg.sender == self.arbitrator
send(self.seller, self.balance)
@external
def refund():
assert msg.sender == self.seller or msg.sender == self.arbitrator
send(self.buyer, self.balance)
"""
a0, a1, a2 = w3.eth.accounts[:3]
c = get_contract_with_gas_estimation(arbitration_code, value=1)
c.setup(a1, a2, transact={})
assert_tx_failed(lambda: c.finalize(transact={"from": a1}))
c.finalize(transact={})
print("Passed escrow test")
def test_arbitration_code_with_init(w3, assert_tx_failed, get_contract_with_gas_estimation):
arbitration_code_with_init = """
buyer: address
seller: address
arbitrator: address
@external
@payable
def __init__(_seller: address, _arbitrator: address):
if self.buyer == ZERO_ADDRESS:
self.buyer = msg.sender
self.seller = _seller
self.arbitrator = _arbitrator
@external
def finalize():
assert msg.sender == self.buyer or msg.sender == self.arbitrator
send(self.seller, self.balance)
@external
def refund():
assert msg.sender == self.seller or msg.sender == self.arbitrator
send(self.buyer, self.balance)
"""
a0, a1, a2 = w3.eth.accounts[:3]
c = get_contract_with_gas_estimation(arbitration_code_with_init, *[a1, a2], value=1)
assert_tx_failed(lambda: c.finalize(transact={"from": a1}))
c.finalize(transact={"from": a0})
print("Passed escrow test with initializer")
| 27.910448 | 92 | 0.706952 | 247 | 1,870 | 5.1417 | 0.206478 | 0.070866 | 0.102362 | 0.056693 | 0.805512 | 0.783465 | 0.783465 | 0.704724 | 0.704724 | 0.658268 | 0 | 0.013592 | 0.173797 | 1,870 | 66 | 93 | 28.333333 | 0.808414 | 0.017647 | 0 | 0.730769 | 0 | 0 | 0.603815 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.038462 | false | 0.038462 | 0 | 0 | 0.038462 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4639d75549f1b810b28d5436cca10557e93bc9b4 | 5,004 | py | Python | model/transforms/autoregressive/ar_mixtures.py | didriknielsen/argmax_flows | 4ffff4bd6f7b25e20292eff6bad2bf5a962e8d39 | [
"MIT"
] | 34 | 2021-06-01T11:58:49.000Z | 2022-03-15T01:59:23.000Z | model/transforms/autoregressive/ar_mixtures.py | didriknielsen/argmax_flows | 4ffff4bd6f7b25e20292eff6bad2bf5a962e8d39 | [
"MIT"
] | 1 | 2022-03-15T01:47:00.000Z | 2022-03-15T01:47:00.000Z | model/transforms/autoregressive/ar_mixtures.py | didriknielsen/argmax_flows | 4ffff4bd6f7b25e20292eff6bad2bf5a962e8d39 | [
"MIT"
] | 2 | 2021-12-16T15:34:36.000Z | 2022-03-10T10:01:57.000Z | import torch
from survae.utils import sum_except_batch
from survae.transforms.bijections.functional.mixtures import gaussian_mixture_transform, logistic_mixture_transform, censored_logistic_mixture_transform
from survae.transforms.bijections.functional.mixtures import get_mixture_params
from .ar import AutoregressiveBijection
class GaussianMixtureAutoregressiveBijection(AutoregressiveBijection):
def __init__(self, ar_net, scheme, num_mixtures):
super(GaussianMixtureAutoregressiveBijection, self).__init__(ar_net=ar_net, scheme=scheme)
self.num_mixtures = num_mixtures
self.set_bisection_params()
def set_bisection_params(self, eps=1e-10, max_iters=100):
self.max_iters = max_iters
self.eps = eps
def _num_params(self):
return 3 * self.num_mixtures
def _elementwise(self, inputs, params, inverse):
assert params.shape[-1] == self._num_params()
logit_weights, means, log_scales = get_mixture_params(params, num_mixtures=self.num_mixtures)
x = gaussian_mixture_transform(inputs=inputs,
logit_weights=logit_weights,
means=means,
log_scales=log_scales,
eps=self.eps,
max_iters=self.max_iters,
inverse=inverse)
if inverse:
return x
else:
z, ldj_elementwise = x
ldj = sum_except_batch(ldj_elementwise)
return z, ldj
def _forward(self, x, params):
return self._elementwise(x, params, inverse=False)
def _element_inverse(self, z, element_params):
return self._elementwise(z, element_params, inverse=True)
class LogisticMixtureAutoregressiveBijection(AutoregressiveBijection):
def __init__(self, ar_net, scheme, num_mixtures):
super(LogisticMixtureAutoregressiveBijection, self).__init__(ar_net=ar_net, scheme=scheme)
self.num_mixtures = num_mixtures
self.set_bisection_params()
def set_bisection_params(self, eps=1e-10, max_iters=100):
self.max_iters = max_iters
self.eps = eps
def _num_params(self):
return 3 * self.num_mixtures
def _elementwise(self, inputs, params, inverse):
assert params.shape[-1] == self._num_params()
logit_weights, means, log_scales = get_mixture_params(params, num_mixtures=self.num_mixtures)
x = logistic_mixture_transform(inputs=inputs,
logit_weights=logit_weights,
means=means,
log_scales=log_scales,
eps=self.eps,
max_iters=self.max_iters,
inverse=inverse)
if inverse:
return x
else:
z, ldj_elementwise = x
ldj = sum_except_batch(ldj_elementwise)
return z, ldj
def _forward(self, x, params):
return self._elementwise(x, params, inverse=False)
def _element_inverse(self, z, element_params):
return self._elementwise(z, element_params, inverse=True)
class CensoredLogisticMixtureAutoregressiveBijection(AutoregressiveBijection):
def __init__(self, ar_net, scheme, num_mixtures, num_bins):
super(CensoredLogisticMixtureAutoregressiveBijection, self).__init__(ar_net=ar_net, scheme=scheme)
self.num_mixtures = num_mixtures
self.num_bins = num_bins
self.set_bisection_params()
def set_bisection_params(self, eps=1e-10, max_iters=100):
self.max_iters = max_iters
self.eps = eps
def _num_params(self):
return 3 * self.num_mixtures
def _elementwise(self, inputs, params, inverse):
assert params.shape[-1] == self._num_params()
logit_weights, means, log_scales = get_mixture_params(params, num_mixtures=self.num_mixtures)
x = censored_logistic_mixture_transform(inputs=inputs,
logit_weights=logit_weights,
means=means,
log_scales=log_scales,
num_bins=self.num_bins,
eps=self.eps,
max_iters=self.max_iters,
inverse=inverse)
if inverse:
return x
else:
z, ldj_elementwise = x
ldj = sum_except_batch(ldj_elementwise)
return z, ldj
def _forward(self, x, params):
return self._elementwise(x, params, inverse=False)
def _element_inverse(self, z, element_params):
return self._elementwise(z, element_params, inverse=True)
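All three bijections above invert a monotone mixture CDF numerically, which is what the `eps`/`max_iters` bisection parameters control. A self-contained sketch of that inversion for a logistic mixture, independent of survae (function names here are illustrative):

```python
import math

def logistic_mixture_cdf(x, weights, means, log_scales):
    # monotone CDF of a logistic mixture: sum_k w_k * sigmoid((x - mu_k) / s_k)
    return sum(w / (1.0 + math.exp(-(x - m) / math.exp(ls)))
               for w, m, ls in zip(weights, means, log_scales))

def invert_by_bisection(u, weights, means, log_scales, eps=1e-10, max_iters=100):
    # find x with CDF(x) = u by repeatedly halving a bracketing interval
    lo, hi = -100.0, 100.0
    for _ in range(max_iters):
        mid = 0.5 * (lo + hi)
        if logistic_mixture_cdf(mid, weights, means, log_scales) < u:
            lo = mid
        else:
            hi = mid
        if hi - lo < eps:
            break
    return 0.5 * (lo + hi)

w, mu, ls = [0.3, 0.7], [-1.0, 2.0], [0.0, 0.5]
x = 0.8
u = logistic_mixture_cdf(x, w, mu, ls)    # forward transform
x_rec = invert_by_bisection(u, w, mu, ls)  # inverse recovers x
```

Because the mixture CDF is strictly increasing, bisection converges unconditionally; the library's batched `*_mixture_transform` functions apply the same idea elementwise over tensors.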
| 38.198473 | 152 | 0.59952 | 521 | 5,004 | 5.426104 | 0.120921 | 0.070039 | 0.047754 | 0.057305 | 0.828086 | 0.828086 | 0.828086 | 0.789883 | 0.789883 | 0.770074 | 0 | 0.007162 | 0.330336 | 5,004 | 130 | 153 | 38.492308 | 0.836467 | 0 | 0 | 0.824742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030928 | 1 | 0.185567 | false | 0 | 0.051546 | 0.092784 | 0.42268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4682abc8b93c585ccd2794c42bcb41389ae3f9a9 | 160 | py | Python | SimCompanies/contracts.py | Gunak/SimCompaniesCLI | 87ea68b63e1a82013caf30849b89600585236945 | [
"MIT"
] | 4 | 2021-02-25T17:27:38.000Z | 2021-07-14T09:00:25.000Z | SimCompanies/contracts.py | Gunak/SimCompanies | 87ea68b63e1a82013caf30849b89600585236945 | [
"MIT"
] | null | null | null | SimCompanies/contracts.py | Gunak/SimCompanies | 87ea68b63e1a82013caf30849b89600585236945 | [
"MIT"
] | null | null | null | incoming_contracts = "https://www.simcompanies.com/api/v2/contracts-incoming/"
outgoing_contracts = "https://www.simcompanies.com/api/v2/contracts-outgoing/"
| 32 | 78 | 0.7875 | 20 | 160 | 6.2 | 0.45 | 0.225806 | 0.274194 | 0.467742 | 0.741935 | 0.741935 | 0.741935 | 0.741935 | 0 | 0 | 0 | 0.013158 | 0.05 | 160 | 4 | 79 | 40 | 0.802632 | 0 | 0 | 0 | 0 | 0 | 0.691824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
468362f08e4f31d9a33553af1de0190c94729abc | 6,821 | py | Python | loldib/getratings/models/NA/na_vladimir/na_vladimir_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_vladimir/na_vladimir_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_vladimir/na_vladimir_bot.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Vladimir_Bot_Aatrox(Ratings):
pass
class NA_Vladimir_Bot_Ahri(Ratings):
pass
class NA_Vladimir_Bot_Akali(Ratings):
pass
class NA_Vladimir_Bot_Alistar(Ratings):
pass
class NA_Vladimir_Bot_Amumu(Ratings):
pass
class NA_Vladimir_Bot_Anivia(Ratings):
pass
class NA_Vladimir_Bot_Annie(Ratings):
pass
class NA_Vladimir_Bot_Ashe(Ratings):
pass
class NA_Vladimir_Bot_AurelionSol(Ratings):
pass
class NA_Vladimir_Bot_Azir(Ratings):
pass
class NA_Vladimir_Bot_Bard(Ratings):
pass
class NA_Vladimir_Bot_Blitzcrank(Ratings):
pass
class NA_Vladimir_Bot_Brand(Ratings):
pass
class NA_Vladimir_Bot_Braum(Ratings):
pass
class NA_Vladimir_Bot_Caitlyn(Ratings):
pass
class NA_Vladimir_Bot_Camille(Ratings):
pass
class NA_Vladimir_Bot_Cassiopeia(Ratings):
pass
class NA_Vladimir_Bot_Chogath(Ratings):
pass
class NA_Vladimir_Bot_Corki(Ratings):
pass
class NA_Vladimir_Bot_Darius(Ratings):
pass
class NA_Vladimir_Bot_Diana(Ratings):
pass
class NA_Vladimir_Bot_Draven(Ratings):
pass
class NA_Vladimir_Bot_DrMundo(Ratings):
pass
class NA_Vladimir_Bot_Ekko(Ratings):
pass
class NA_Vladimir_Bot_Elise(Ratings):
pass
class NA_Vladimir_Bot_Evelynn(Ratings):
pass
class NA_Vladimir_Bot_Ezreal(Ratings):
pass
class NA_Vladimir_Bot_Fiddlesticks(Ratings):
pass
class NA_Vladimir_Bot_Fiora(Ratings):
pass
class NA_Vladimir_Bot_Fizz(Ratings):
pass
class NA_Vladimir_Bot_Galio(Ratings):
pass
class NA_Vladimir_Bot_Gangplank(Ratings):
pass
class NA_Vladimir_Bot_Garen(Ratings):
pass
class NA_Vladimir_Bot_Gnar(Ratings):
pass
class NA_Vladimir_Bot_Gragas(Ratings):
pass
class NA_Vladimir_Bot_Graves(Ratings):
pass
class NA_Vladimir_Bot_Hecarim(Ratings):
pass
class NA_Vladimir_Bot_Heimerdinger(Ratings):
pass
class NA_Vladimir_Bot_Illaoi(Ratings):
pass
class NA_Vladimir_Bot_Irelia(Ratings):
pass
class NA_Vladimir_Bot_Ivern(Ratings):
pass
class NA_Vladimir_Bot_Janna(Ratings):
pass
class NA_Vladimir_Bot_JarvanIV(Ratings):
pass
class NA_Vladimir_Bot_Jax(Ratings):
pass
class NA_Vladimir_Bot_Jayce(Ratings):
pass
class NA_Vladimir_Bot_Jhin(Ratings):
pass
class NA_Vladimir_Bot_Jinx(Ratings):
pass
class NA_Vladimir_Bot_Kalista(Ratings):
pass
class NA_Vladimir_Bot_Karma(Ratings):
pass
class NA_Vladimir_Bot_Karthus(Ratings):
pass
class NA_Vladimir_Bot_Kassadin(Ratings):
pass
class NA_Vladimir_Bot_Katarina(Ratings):
pass
class NA_Vladimir_Bot_Kayle(Ratings):
pass
class NA_Vladimir_Bot_Kayn(Ratings):
pass
class NA_Vladimir_Bot_Kennen(Ratings):
pass
class NA_Vladimir_Bot_Khazix(Ratings):
pass
class NA_Vladimir_Bot_Kindred(Ratings):
pass
class NA_Vladimir_Bot_Kled(Ratings):
pass
class NA_Vladimir_Bot_KogMaw(Ratings):
pass
class NA_Vladimir_Bot_Leblanc(Ratings):
pass
class NA_Vladimir_Bot_LeeSin(Ratings):
pass
class NA_Vladimir_Bot_Leona(Ratings):
pass
class NA_Vladimir_Bot_Lissandra(Ratings):
pass
class NA_Vladimir_Bot_Lucian(Ratings):
pass
class NA_Vladimir_Bot_Lulu(Ratings):
pass
class NA_Vladimir_Bot_Lux(Ratings):
pass
class NA_Vladimir_Bot_Malphite(Ratings):
pass
class NA_Vladimir_Bot_Malzahar(Ratings):
pass
class NA_Vladimir_Bot_Maokai(Ratings):
pass
class NA_Vladimir_Bot_MasterYi(Ratings):
pass
class NA_Vladimir_Bot_MissFortune(Ratings):
pass
class NA_Vladimir_Bot_MonkeyKing(Ratings):
pass
class NA_Vladimir_Bot_Mordekaiser(Ratings):
pass
class NA_Vladimir_Bot_Morgana(Ratings):
pass
class NA_Vladimir_Bot_Nami(Ratings):
pass
class NA_Vladimir_Bot_Nasus(Ratings):
pass
class NA_Vladimir_Bot_Nautilus(Ratings):
pass
class NA_Vladimir_Bot_Nidalee(Ratings):
pass
class NA_Vladimir_Bot_Nocturne(Ratings):
pass
class NA_Vladimir_Bot_Nunu(Ratings):
pass
class NA_Vladimir_Bot_Olaf(Ratings):
pass
class NA_Vladimir_Bot_Orianna(Ratings):
pass
class NA_Vladimir_Bot_Ornn(Ratings):
pass
class NA_Vladimir_Bot_Pantheon(Ratings):
pass
class NA_Vladimir_Bot_Poppy(Ratings):
pass
class NA_Vladimir_Bot_Quinn(Ratings):
pass
class NA_Vladimir_Bot_Rakan(Ratings):
pass
class NA_Vladimir_Bot_Rammus(Ratings):
pass
class NA_Vladimir_Bot_RekSai(Ratings):
pass
class NA_Vladimir_Bot_Renekton(Ratings):
pass
class NA_Vladimir_Bot_Rengar(Ratings):
pass
class NA_Vladimir_Bot_Riven(Ratings):
pass
class NA_Vladimir_Bot_Rumble(Ratings):
pass
class NA_Vladimir_Bot_Ryze(Ratings):
pass
class NA_Vladimir_Bot_Sejuani(Ratings):
pass
class NA_Vladimir_Bot_Shaco(Ratings):
pass
class NA_Vladimir_Bot_Shen(Ratings):
pass
class NA_Vladimir_Bot_Shyvana(Ratings):
pass
class NA_Vladimir_Bot_Singed(Ratings):
pass
class NA_Vladimir_Bot_Sion(Ratings):
pass
class NA_Vladimir_Bot_Sivir(Ratings):
pass
class NA_Vladimir_Bot_Skarner(Ratings):
pass
class NA_Vladimir_Bot_Sona(Ratings):
pass
class NA_Vladimir_Bot_Soraka(Ratings):
pass
class NA_Vladimir_Bot_Swain(Ratings):
pass
class NA_Vladimir_Bot_Syndra(Ratings):
pass
class NA_Vladimir_Bot_TahmKench(Ratings):
pass
class NA_Vladimir_Bot_Taliyah(Ratings):
pass
class NA_Vladimir_Bot_Talon(Ratings):
pass
class NA_Vladimir_Bot_Taric(Ratings):
pass
class NA_Vladimir_Bot_Teemo(Ratings):
pass
class NA_Vladimir_Bot_Thresh(Ratings):
pass
class NA_Vladimir_Bot_Tristana(Ratings):
pass
class NA_Vladimir_Bot_Trundle(Ratings):
pass
class NA_Vladimir_Bot_Tryndamere(Ratings):
pass
class NA_Vladimir_Bot_TwistedFate(Ratings):
pass
class NA_Vladimir_Bot_Twitch(Ratings):
pass
class NA_Vladimir_Bot_Udyr(Ratings):
pass
class NA_Vladimir_Bot_Urgot(Ratings):
pass
class NA_Vladimir_Bot_Varus(Ratings):
pass
class NA_Vladimir_Bot_Vayne(Ratings):
pass
class NA_Vladimir_Bot_Veigar(Ratings):
pass
class NA_Vladimir_Bot_Velkoz(Ratings):
pass
class NA_Vladimir_Bot_Vi(Ratings):
pass
class NA_Vladimir_Bot_Viktor(Ratings):
pass
class NA_Vladimir_Bot_Vladimir(Ratings):
pass
class NA_Vladimir_Bot_Volibear(Ratings):
pass
class NA_Vladimir_Bot_Warwick(Ratings):
pass
class NA_Vladimir_Bot_Xayah(Ratings):
pass
class NA_Vladimir_Bot_Xerath(Ratings):
pass
class NA_Vladimir_Bot_XinZhao(Ratings):
pass
class NA_Vladimir_Bot_Yasuo(Ratings):
pass
class NA_Vladimir_Bot_Yorick(Ratings):
pass
class NA_Vladimir_Bot_Zac(Ratings):
pass
class NA_Vladimir_Bot_Zed(Ratings):
pass
class NA_Vladimir_Bot_Ziggs(Ratings):
pass
class NA_Vladimir_Bot_Zilean(Ratings):
pass
class NA_Vladimir_Bot_Zyra(Ratings):
pass
| 16.357314 | 46 | 0.776133 | 972 | 6,821 | 5.020576 | 0.151235 | 0.197951 | 0.42418 | 0.509016 | 0.814139 | 0.814139 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162879 | 6,821 | 416 | 47 | 16.396635 | 0.854641 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
46af34ffc340884b04324a15d10edc67c92971c6 | 22,041 | py | Python | app/pyOption/mcs_option_class.py | lyhrobin00007/FlaskCTA | 5c93017d2432d0e8d0c1b919d9b9011bd2029a7f | [
"MIT"
] | null | null | null | app/pyOption/mcs_option_class.py | lyhrobin00007/FlaskCTA | 5c93017d2432d0e8d0c1b919d9b9011bd2029a7f | [
"MIT"
] | null | null | null | app/pyOption/mcs_option_class.py | lyhrobin00007/FlaskCTA | 5c93017d2432d0e8d0c1b919d9b9011bd2029a7f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Nov 10 17:19:03 2016
@author: 024536
"""
from time import time
from math import exp, sqrt
from random import gauss, seed
from numba import jit
import numpy as np
class mcsOptionClass(object):
    ''' Class for path-dependent options valued by Monte Carlo simulation.
Attributes
==========
S0 : float
initial stock/index level
K1 : float
strike price
K2 : float
strike price
T : float
maturity (in year fractions)
r : float
constant risk-free short rate
q : float
constant dividended rate
N : int
numbers of option pay terms
Rp: float
fixed pay off rate of option
sigma : float
volatility factor in diffusion term
optionType : string
'call' or 'put'
I : int
number of paths
M : int
number of time steps
bp : float
basic point
Methods
=======
value : float
return present value of option
delta : float
return delta of option
theta : float
return theta of option
gamma : float
return gamma of option
vega : float
return Vega of option
'''
def __init__(self,S0,K1,K2,T,r,q,sigma,N,Rp,optionType,optionStyle='BullSpreadPathN',I=25000,M=100,seedNum=2000):
self.S0 = S0
self.K1 = K1
self.K2 = K2
self.T = T
self.r = r
self.q = q
self.sigma = sigma
self.N = N
self.Rp = Rp
self.optionType=optionType
self.I = I
self.M = M
self.bp = 0.0001
seed(seedNum)
        print(optionStyle)
if optionStyle == 'BullSpreadPathN':
self.value_ = self.valueBullSpreadPathN
elif optionStyle == 'DoubleNoTouch':
self.value_ = self.valueDoubleNoTouch
elif optionStyle == 'OutOfRangeRate':
self.value_ = self.valueOutOfRangeRate
elif optionStyle == 'DownAndOutAlternative':
self.value_ = self.valueDownAndOutAlternative
elif optionStyle == 'ModerateOption':
self.value_ = self.valueModerateOption
        else:
            # fall back to the default bull-spread scheme for unrecognised styles
            self.value_ = self.valueBullSpreadPathN
@jit
def valueBullSpreadPathN(self,S0,T,r,sigma):
# Parameter
K1 = self.K1
K2 = self.K2
q = self.q
N = self.N
I = self.I
M = self.M
dt = T/M
# Simulating I paths with M time steps
S = np.zeros((I,M+1))
for i in range(I):
for t in range(M+1):
if t == 0:
S[i,t] = S0
else:
z = gauss(0.0, 1.0)
St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
S[i,t] = St
# Calculating the Monte Carlo estimator
CT = np.zeros((I,N))
        Ts = [int(round(M*(i+1)/float(N))) for i in range(N)]
for i in range(N):
CT[:,i] = S[:,Ts[i]]
# Terminal pay off
payOff,value = {},{}
# Call Spread
payOff['call'] = np.minimum(K2-K1,np.maximum(CT.mean(axis=1)-K1,0))
value['call'] = exp(-r*T)*payOff['call'].mean()
# Put Spread
payOff['put'] = np.minimum(K2-K1,np.maximum(K2-CT.mean(axis=1),0))
value['put'] = exp(-r*T)*payOff['put'].mean()
return value
@jit
def valueDoubleNoTouch(self,S0,T,r,sigma):
# Parameter
K1 = self.K1
K2 = self.K2
q = self.q
I = self.I
M = self.M
dt = T/M
Rp = self.Rp
# Simulating I paths with M time steps
S = np.zeros((I,M+1))
for i in range(I):
for t in range(M+1):
if t == 0:
S[i,t] = S0
else:
z = gauss(0.0, 1.0)
St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
S[i,t] = St
# Calculating the Monte Carlo estimator
CT = np.zeros((I,1))
CT = ((S<K1)+(S>K2)).sum(axis=1)==0
        PT = np.zeros((I,1))
        # note: for K1 < K2 every price satisfies S >= K1 or S <= K2,
        # so this condition never holds and the put leg prices to zero
        PT = ((S>=K1)+(S<=K2)).sum(axis=1)==0
        # Terminal pay off
        payOff,value = {},{}
        # in-range (no-touch) leg
        payOff['call'] = CT.mean()*Rp
        value['call'] = exp(-r*T)*payOff['call']
        # out-of-range (touch) leg
        payOff['put'] = PT.mean()*Rp
        value['put'] = exp(-r*T)*payOff['put']
return value
@jit
def valueOutOfRangeRate(self,S0,T,r,sigma):
# Parameter
K1 = self.K1
K2 = self.K2
q = self.q
I = self.I
M = self.M
dt = T/M
Rp = self.Rp
# Simulating I paths with M time steps
S = np.zeros((I,M+1))
for i in range(I):
for t in range(M+1):
if t == 0:
S[i,t] = S0
else:
z = gauss(0.0, 1.0)
St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
S[i,t] = St
# Calculating the Monte Carlo estimator
CT = np.zeros((I,1))
CT = ((S>=K1)*(S<=K2)).mean(axis=1)
PT = np.zeros((I,1))
PT = ((S<K1)+(S>K2)).mean(axis=1)
        # Terminal pay off
        payOff,value = {},{}
        # time-in-range leg
        payOff['call'] = CT.mean()*Rp
        value['call'] = exp(-r*T)*payOff['call']
        # time-out-of-range leg
        payOff['put'] = PT.mean()*Rp
        value['put'] = exp(-r*T)*payOff['put']
return value
@jit
def valueDownAndOutAlternative(self,S0,T,r,sigma):
# Parameter
K1 = self.K1
# K2 = self.K2
q = self.q
I = self.I
M = self.M
dt = T/M
Rp = self.Rp
# Simulating I paths with M time steps
S = np.zeros((I,M+1))
for i in range(I):
for t in range(M+1):
if t == 0:
S[i,t] = S0
else:
z = gauss(0.0, 1.0)
St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
S[i,t] = St
# Calculating the Monte Carlo estimator
CT = np.zeros((I,1))
CT = ((S.min(axis=1)<K1)*(S[:,-1]<S[:,0]))==0
PT = np.zeros((I,1))
PT = ((S.min(axis=1)<K1)*(S[:,-1]<S[:,0]))==1
# Terminal pay off
payOff,value = {},{}
# Call Spread
payOff['call'] = CT.mean()*Rp
value['call'] = exp(-r*T)*payOff['call']
# Put Spread
payOff['put'] = PT.mean()*Rp
value['put'] = exp(-r*T)*payOff['put']
return value
@jit
def valueModerateOption(self,S0,T,r,sigma):
# Parameter
K1 = self.K1
K2 = self.K2
q = self.q
I = self.I
M = self.M
dt = T/M
Rp = self.Rp
# Simulating I paths with M time steps
S = np.zeros((I,M+1))
for i in range(I):
for t in range(M+1):
if t == 0:
S[i,t] = S0
else:
z = gauss(0.0, 1.0)
St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
S[i,t] = St
# Calculating the Monte Carlo estimator
CT = np.zeros((I,3))
CT[:,0] = S.max(axis=1)>=K2
CT[:,1] = ((S.max(axis=1)<K2)*(S[:,-1]<=S[:,0]))
CT[:,2] = ((S.max(axis=1)<K2)*(S[:,-1]>S[:,0])*(S[:,-1]/S[:,0]-1))
# PT = np.zeros((I,M))
# PT = ((S.min(axis=1)<K1)*(S[:,-1]<S[:,0]))==1
PT = np.zeros((I,3))
PT[:,0] = S.max(axis=1)<=K1
PT[:,1] = ((S.max(axis=1)<K1)*(S[:,-1]>=S[:,0]))
PT[:,2] = ((S.max(axis=1)<K1)*(S[:,-1]<S[:,0])*(1-S[:,-1]/S[:,0]))
# Terminal pay off
payOff,value = {},{}
# Call Spread
payOff['call'] = CT[:,0].mean()*Rp+CT[:,2].mean()
value['call'] = exp(-r*T)*payOff['call']
# Put Spread
payOff['put'] = PT[:,0].mean()*Rp+PT[:,2].mean()
value['put'] = exp(-r*T)*payOff['put']
return value
    def value(self):
        value = self.value_(self.S0,self.T,self.r,self.sigma)
        return value[self.optionType]

    def delta(self):
        value_left = self.value_(self.S0-self.bp,self.T,self.r,self.sigma)
        value_right = self.value_(self.S0+self.bp,self.T,self.r,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def theta(self):
        value_left = self.value_(self.S0,self.T-self.bp,self.r,self.sigma)
        value_right = self.value_(self.S0,self.T+self.bp,self.r,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def gamma(self):
        # Note: this bumps r, so it measures rate sensitivity (rho) rather
        # than a true second-order gamma in S0.
        value_left = self.value_(self.S0,self.T,self.r-self.bp,self.sigma)
        value_right = self.value_(self.S0,self.T,self.r+self.bp,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def vega(self):
        value_left = self.value_(self.S0,self.T,self.r,self.sigma-self.bp)
        value_right = self.value_(self.S0,self.T,self.r,self.sigma+self.bp)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def dictResult(self):
        value_ = self.value()
        delta_ = self.delta()
        theta_ = self.theta()
        gamma_ = self.gamma()
        vega_ = self.vega()
        return {'value' : "%.6f"%value_,'valueRatio' : "%.4f%%"%(value_/self.S0*100),
                'delta' : "%.6f"%delta_,'deltaRatio' : "%.4f%%"%(delta_/self.S0*100),
                'theta' : "%.6f"%theta_,'thetaRatio' : "%.4f%%"%(theta_/self.S0*100),
                'gamma' : "%.6f"%gamma_,'gammaRatio' : "%.4f%%"%(gamma_/self.S0*100),
                'vega' : "%.6f"%vega_ ,'vegaRatio' : "%.4f%%"%(vega_ /self.S0*100) }
class McsBaseOptionClass(object):
    ''' Base class for path-dependent options priced by Monte Carlo simulation.

    Attributes
    ==========
    S0 : float
        initial stock/index level
    K1 : float
        strike price
    K2 : float
        strike price
    T : float
        maturity (in year fractions)
    r : float
        constant risk-free short rate
    q : float
        constant dividend rate
    N : int
        number of option pay terms
    Rp : float
        fixed payoff rate of the option
    sigma : float
        volatility factor in diffusion term
    optionType : string
        'call' or 'put'
    I : int
        number of paths
    M : int
        number of time steps
    bp : float
        basis point (bump size used for the finite-difference Greeks)

    Methods
    =======
    value : float
        return present value of option
    delta : float
        return delta of option
    theta : float
        return theta of option
    gamma : float
        return gamma of option
    vega : float
        return vega of option
    '''
    detail = []

    def __init__(self):
        pass
    @jit
    def value_(self,S0,T,r,sigma):
        return {'call':0.0,'put':0.0}

    def value(self):
        value = self.value_(self.S0,self.T,self.r,self.sigma)
        return value[self.optionType]

    def delta(self):
        value_left = self.value_(self.S0-self.bp,self.T,self.r,self.sigma)
        value_right = self.value_(self.S0+self.bp,self.T,self.r,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def theta(self):
        value_left = self.value_(self.S0,self.T-self.bp,self.r,self.sigma)
        value_right = self.value_(self.S0,self.T+self.bp,self.r,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def gamma(self):
        # Note: this bumps r, so it measures rate sensitivity (rho) rather
        # than a true second-order gamma in S0.
        value_left = self.value_(self.S0,self.T,self.r-self.bp,self.sigma)
        value_right = self.value_(self.S0,self.T,self.r+self.bp,self.sigma)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def vega(self):
        value_left = self.value_(self.S0,self.T,self.r,self.sigma-self.bp)
        value_right = self.value_(self.S0,self.T,self.r,self.sigma+self.bp)
        return (value_right[self.optionType]-value_left[self.optionType])/(2*self.bp)

    def dictResult(self):
        value_ = self.value()
        delta_ = self.delta()
        theta_ = self.theta()
        gamma_ = self.gamma()
        vega_ = self.vega()
        return {'value' : "%.6f"%value_,'valueRatio' : "%.4f%%"%(value_/self.S0*100),
                'delta' : "%.6f"%delta_,'deltaRatio' : "%.4f%%"%(delta_/self.S0*100),
                'theta' : "%.6f"%theta_,'thetaRatio' : "%.4f%%"%(theta_/self.S0*100),
                'gamma' : "%.6f"%gamma_,'gammaRatio' : "%.4f%%"%(gamma_/self.S0*100),
                'vega' : "%.6f"%vega_ ,'vegaRatio' : "%.4f%%"%(vega_ /self.S0*100) }
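All the Greek methods above share one symmetric-bump pattern. A minimal standalone sketch of that central difference, with a toy quadratic standing in for `value_` (the helper name `central_diff` is my own, not part of this module):

```python
def central_diff(f, x, h=1e-4):
    # Symmetric bump, as in delta()/theta()/vega() above: the O(h**2)
    # accurate estimate (f(x+h) - f(x-h)) / (2h) of f'(x).
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Toy "pricing" function: V(S0) = S0**2, so the exact sensitivity at 3 is 6.
delta_estimate = central_diff(lambda s: s ** 2, 3.0)
```

The same helper reproduces each Greek in the class by bumping a different argument of the pricing function.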
class BullSpreadPathNClass(McsBaseOptionClass):
    def __init__(self,S0,K1,K2,T,r,q,sigma,N,optionType,I=25000,M=100,seedNum=2000):
        self.S0,self.T,self.r,self.q,self.sigma,self.optionType = S0,T,r,q,sigma,optionType
        self.I,self.M,self.bp = I,M,0.0001
        seed(seedNum)
        self.K1,self.K2,self.N = K1,K2,N

    @jit
    def value_(self,S0,T,r,sigma):
        # Parameter
        K1 = self.K1
        K2 = self.K2
        q = self.q
        N = self.N
        I = self.I
        M = self.M
        dt = T/M
        # Simulating I paths with M time steps
        S = np.zeros((I,M+1))
        for i in range(I):
            for t in range(M+1):
                if t == 0:
                    S[i,t] = S0
                else:
                    z = gauss(0.0, 1.0)
                    St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
                    S[i,t] = St
        # Calculating the Monte Carlo estimator over N evenly spaced fixings
        CT = np.zeros((I,N))
        Ts = list(range(N))  # list(), so the element assignment below works on Python 3
        for i in range(0,N):
            Ts[i] = int(round(M*(i+1)/float(N)))
        for i in range(N):
            CT[:,i] = S[:,Ts[i]]
        # Terminal pay off
        payOff,value = {},{}
        # Call Spread
        payOff['call'] = np.minimum(K2-K1,np.maximum(CT.mean(axis=1)-K1,0))
        value['call'] = exp(-r*T)*payOff['call'].mean()
        # Put Spread
        payOff['put'] = np.minimum(K2-K1,np.maximum(K2-CT.mean(axis=1),0))
        value['put'] = exp(-r*T)*payOff['put'].mean()
        return value
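Every `value_` above rebuilds its paths with the same per-element Python loop. A sketch of the vectorised equivalent (my addition — `simulate_gbm_paths` and the NumPy `Generator` are not part of this module, which draws one `gauss` variate at a time):

```python
import numpy as np

def simulate_gbm_paths(S0, T, r, q, sigma, I, M, rng=None):
    # Vectorised geometric Brownian motion: draw all I*M normal variates at
    # once and take a cumulative product of log-normal increments per path,
    # matching S[i,t] = S[i,t-1] * exp((r-q-0.5*sigma^2)dt + sigma*sqrt(dt)*z).
    rng = np.random.default_rng(rng)
    dt = T / M
    z = rng.standard_normal((I, M))
    increments = np.exp((r - q - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    S = np.empty((I, M + 1))
    S[:, 0] = S0
    S[:, 1:] = S0 * np.cumprod(increments, axis=1)
    return S

paths = simulate_gbm_paths(100.0, 0.25, 0.03, 0.0, 0.2, I=1000, M=50, rng=0)
```

The returned `(I, M+1)` array drops into the barrier/fixing logic unchanged; the seeding differs from the module's `seed(seedNum)`, so values will not be bit-identical to the loop version.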
class DoubleNoTouchClass(McsBaseOptionClass):
    def __init__(self,S0,K1,K2,T,r,q,sigma,Rp,optionType,I=25000,M=100,seedNum=2000):
        self.S0,self.T,self.r,self.q,self.sigma,self.optionType = S0,T,r,q,sigma,optionType
        self.I,self.M,self.bp = I,M,0.0001
        seed(seedNum)
        self.K1,self.K2,self.Rp = K1,K2,Rp

    @jit
    def value_(self,S0,T,r,sigma):
        # Parameter
        K1 = self.K1
        K2 = self.K2
        q = self.q
        I = self.I
        M = self.M
        dt = T/M
        Rp = self.Rp
        # Simulating I paths with M time steps
        S = np.zeros((I,M+1))
        for i in range(I):
            for t in range(M+1):
                if t == 0:
                    S[i,t] = S0
                else:
                    z = gauss(0.0, 1.0)
                    St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
                    S[i,t] = St
        # Calculating the Monte Carlo estimator
        CT = np.zeros((I,1))
        # Call leg pays if the path never touches either barrier (stays inside [K1, K2])
        CT = ((S<K1)+(S>K2)).sum(axis=1)==0
        PT = np.zeros((I,1))
        # Put leg pays if the path never trades inside [K1, K2] (AND of the two
        # conditions; an OR here would make PT identically False)
        PT = ((S>=K1)*(S<=K2)).sum(axis=1)==0
        # Terminal pay off
        payOff,value = {},{}
        # Call Spread
        payOff['call'] = CT.mean()*Rp
        value['call'] = exp(-r*T)*payOff['call']
        # Put Spread
        payOff['put'] = PT.mean()*Rp
        value['put'] = exp(-r*T)*payOff['put']
        return value
class OutOfRangeRateClass(McsBaseOptionClass):
    def __init__(self,S0,K1,K2,T,r,q,sigma,Rp,optionType,I=25000,M=100,seedNum=2000):
        self.S0,self.T,self.r,self.q,self.sigma,self.optionType = S0,T,r,q,sigma,optionType
        self.I,self.M,self.bp = I,M,0.0001
        seed(seedNum)
        self.K1,self.K2,self.Rp = K1,K2,Rp

    @jit
    def value_(self,S0,T,r,sigma):
        # Parameter
        K1 = self.K1
        K2 = self.K2
        q = self.q
        I = self.I
        M = self.M
        dt = T/M
        Rp = self.Rp
        # Simulating I paths with M time steps
        S = np.zeros((I,M+1))
        for i in range(I):
            for t in range(M+1):
                if t == 0:
                    S[i,t] = S0
                else:
                    z = gauss(0.0, 1.0)
                    St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
                    S[i,t] = St
        # Calculating the Monte Carlo estimator
        CT = np.zeros((I,1))
        CT = ((S>=K1)*(S<=K2)).mean(axis=1)
        PT = np.zeros((I,1))
        PT = ((S<K1)+(S>K2)).mean(axis=1)
        # Terminal pay off
        payOff,value = {},{}
        # Call Spread
        payOff['call'] = CT.mean()*Rp
        value['call'] = exp(-r*T)*payOff['call']
        # Put Spread
        payOff['put'] = PT.mean()*Rp
        value['put'] = exp(-r*T)*payOff['put']
        return value
class DownAndOutAlternativeClass(McsBaseOptionClass):
    def __init__(self,S0,K,T,r,q,sigma,Rp,optionType,I=25000,M=100,seedNum=2000):
        self.S0,self.T,self.r,self.q,self.sigma,self.optionType = S0,T,r,q,sigma,optionType
        self.I,self.M,self.bp = I,M,0.0001
        seed(seedNum)
        self.K,self.Rp = K,Rp

    @jit
    def value_(self,S0,T,r,sigma):
        # Parameter
        K = self.K
        q = self.q
        I = self.I
        M = self.M
        dt = T/M
        Rp = self.Rp
        # Simulating I paths with M time steps
        S = np.zeros((I,M+1))
        for i in range(I):
            for t in range(M+1):
                if t == 0:
                    S[i,t] = S0
                else:
                    z = gauss(0.0, 1.0)
                    St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
                    S[i,t] = St
        # Calculating the Monte Carlo estimator
        CT = np.zeros((I,1))
        CT = ((S.min(axis=1)<K)*(S[:,-1]<S[:,0]))==0
        PT = np.zeros((I,1))
        PT = ((S.min(axis=1)>K)*(S[:,-1]>S[:,0]))==0
        # Terminal pay off
        payOff,value = {},{}
        # Call Spread
        payOff['call'] = CT.mean()*Rp
        value['call'] = exp(-r*T)*payOff['call']
        # Put Spread
        payOff['put'] = PT.mean()*Rp
        value['put'] = exp(-r*T)*payOff['put']
        return value
class ModerateOptionClass(McsBaseOptionClass):
    def __init__(self,S0,K,T,r,q,sigma,Rp,optionType,I=25000,M=100,seedNum=2000):
        self.S0,self.T,self.r,self.q,self.sigma,self.optionType = S0,T,r,q,sigma,optionType
        self.I,self.M,self.bp = I,M,0.0001
        seed(seedNum)
        self.K,self.Rp = K,Rp

    @jit
    def value_(self,S0,T,r,sigma):
        # Parameter
        K = self.K
        q = self.q
        I = self.I
        M = self.M
        dt = T/M
        Rp = self.Rp
        # Simulating I paths with M time steps
        S = np.zeros((I,M+1))
        for i in range(I):
            for t in range(M+1):
                if t == 0:
                    S[i,t] = S0
                else:
                    z = gauss(0.0, 1.0)
                    St = S[i,t-1]*exp((r-q-0.5*sigma**2)*dt+sigma*sqrt(dt)*z)
                    S[i,t] = St
        # Calculating the Monte Carlo estimator
        CT = np.zeros((I,3))
        CT[:,0] = S.max(axis=1)>=K
        CT[:,1] = ((S.max(axis=1)<K)*(S[:,-1]<=S[:,0]))
        CT[:,2] = ((S.max(axis=1)<K)*(S[:,-1]>S[:,0])*(S[:,-1]/S[:,0]-1))
        # PT = np.zeros((I,M))
        # PT = ((S.min(axis=1)<K1)*(S[:,-1]<S[:,0]))==1
        PT = np.zeros((I,3))
        PT[:,0] = S.max(axis=1)<=K
        PT[:,1] = ((S.max(axis=1)<K)*(S[:,-1]>=S[:,0]))
        PT[:,2] = ((S.max(axis=1)<K)*(S[:,-1]<S[:,0])*(1-S[:,-1]/S[:,0]))
        # Terminal pay off
        payOff,value = {},{}
        # Call Spread
        payOff['call'] = CT[:,0].mean()*Rp+CT[:,2].mean()
        value['call'] = exp(-r*T)*payOff['call']
        # Put Spread
        payOff['put'] = PT[:,0].mean()*Rp+PT[:,2].mean()
        value['put'] = exp(-r*T)*payOff['put']
        return value
if __name__ == "__main__":
    # S0 = 100.0
    # K1 = 100
    # K2 = 105
    # T = 91/365.0
    # r = 0.00
    # q = 0.00
    # sigma = 0.2
    # N = 3
    # Rp = 1
    # optionType = 'call'
    # optionStyle = 'DoubleKnockOut'
    # I = 100
    S0 = 0.029893
    K1 = 0.02839835
    # K2 = 0.03138765
    K2 = S0*1.1
    T = 91/365.0
    r = 0.03
    q = 0.00
    sigma = 0.13
    N = 91
    Rp = 1
    M = 91
    optionType = 'call'
    optionStyle = 'ModerateOption'
    I = 250000
    ts = []
    ts.append(time())
    # option = BullSpreadPathNClass(S0,K1,K2,T,r,q,sigma,N,optionType,I,M)
    # option = DoubleNoTouchClass(S0,K1,K2,T,r,q,sigma,Rp,optionType,I,M)
    # option = OutOfRangeRateClass(S0,K1,K2,T,r,q,sigma,Rp,optionType,I,M)
    # option = DownAndOutAlternativeClass(S0,K1,T,r,q,sigma,Rp,optionType,I,M)
    option = ModerateOptionClass(S0,K2,T,r,q,sigma,Rp,optionType,I,M)
    print('value', option.value())
    print(time()-ts[-1])
    option2 = mcsOptionClass(S0,K1,K2,T,r,q,sigma,N,Rp,optionType,optionStyle,I,M)
    ts.append(time())
    print('value', option2.value())
    print(time()-ts[-1])
    # ts.append(time())
    # print('theta', option.theta())
    # print(time()-ts[-1])
    # ts.append(time())
    # print('gamma', option.gamma())
    # print(time()-ts[-1])
    # ts.append(time())
    # print('vega', option.vega())
    # print(time()-ts[-1])
    # print(time()-ts[0])
# --- BuildSimHubAPI/htmlParser/__init__.py ---
from .html_utility import extract_value_from_table
from .html_utility import save_html
# --- hmm/tests/test_markov_chain.py ---
import unittest
from hmm.markov_chain import Transition, MarkovChain, DenseMarkovChain
from hmm.logprob import *


class DenseMarkovChainTest(unittest.TestCase):

    LEFT_RIGHT = [
        [0.6, 0.4, 0.0],
        [0.0, 0.6, 0.4],
        [0.0, 0.0, 0.6]
    ]

    def test_from_probs(self):
        chain = DenseMarkovChain.from_probs(DenseMarkovChainTest.LEFT_RIGHT)
        for i in range(0, 3):
            for j in range(0, 3):
                trans = Transition(i, j)
                if i == j:
                    self.assertAlmostEqual(chain[trans].prob, LogProb.from_float(0.6).prob, delta=1e-8)
                elif j == i + 1:
                    self.assertAlmostEqual(chain[trans].prob, LogProb.from_float(0.4).prob, delta=1e-8)
                else:
                    self.assertAlmostEqual(chain[trans].prob, ZERO, delta=1e-8)

    def test_getitem(self):
        chain = DenseMarkovChain.from_probs(DenseMarkovChainTest.LEFT_RIGHT)
        self.assertAlmostEqual(chain[Transition(0,0)].prob, LogProb.from_float(0.6).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(0,1)].prob, LogProb.from_float(0.4).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(0,2)].prob, ZERO, delta=1e-8)

    def test_setitem(self):
        chain = DenseMarkovChain(2)
        chain[Transition(0,0)] = LogProb.from_float(1.0)
        chain[Transition(1,0)] += LogProb.from_float(2.0)
        chain[Transition(1,0)] += LogProb.from_float(3.0)
        self.assertAlmostEqual(chain[Transition(0,0)].prob, LogProb.from_float(1.0).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(1,0)].prob, LogProb.from_float(5.0).prob, delta=1e-8)
    def test_n_states(self):
        # Use the dense chain here; the original reached into MarkovChainTest.
        chain = DenseMarkovChain.from_probs(DenseMarkovChainTest.LEFT_RIGHT)
        self.assertEqual(chain.n_states, 3)
class MarkovChainTest(unittest.TestCase):

    LEFT_RIGHT = [
        [0.6, 0.4, 0.0],
        [0.0, 0.6, 0.4],
        [0.0, 0.0, 0.6]
    ]

    def test_from_probs(self):
        chain = MarkovChain.from_probs(MarkovChainTest.LEFT_RIGHT)
        for i in range(0, 3):
            for j in range(0, 3):
                trans = Transition(i, j)
                if i == j:
                    self.assertAlmostEqual(chain[trans].prob, LogProb.from_float(0.6).prob, delta=1e-8)
                elif j == i + 1:
                    self.assertAlmostEqual(chain[trans].prob, LogProb.from_float(0.4).prob, delta=1e-8)
                else:
                    self.assertAlmostEqual(chain[trans].prob, ZERO, delta=1e-8)

    def test_getitem(self):
        chain = MarkovChain.from_probs(MarkovChainTest.LEFT_RIGHT)
        self.assertAlmostEqual(chain[Transition(0,0)].prob, LogProb.from_float(0.6).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(0,1)].prob, LogProb.from_float(0.4).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(0,2)].prob, ZERO, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(0,3)].prob, ZERO, delta=1e-8)

    def test_setitem(self):
        chain = MarkovChain()
        chain[Transition(0,0)] = LogProb.from_float(1.0)
        chain[Transition(1,0)] += LogProb.from_float(2.0)
        chain[Transition(1,0)] += LogProb.from_float(3.0)
        self.assertAlmostEqual(chain[Transition(0,0)].prob, LogProb.from_float(1.0).prob, delta=1e-8)
        self.assertAlmostEqual(chain[Transition(1,0)].prob, LogProb.from_float(5.0).prob, delta=1e-8)

    def test_n_states(self):
        chain = MarkovChain.from_probs(MarkovChainTest.LEFT_RIGHT)
        self.assertEqual(chain.n_states, 3)
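The `test_setitem` cases above accumulate probabilities with `+=` on `LogProb` values (2.0 + 3.0 giving 5.0 in probability space), which implies log-sum-exp addition under the hood. A minimal sketch of that operation, assuming `LogProb` stores natural-log probabilities (the helper name `logaddexp` is my own):

```python
import math

def logaddexp(a, b):
    # Numerically stable log(exp(a) + exp(b)): factor out the larger term so
    # the exp() argument is never positive and cannot overflow.
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

# Adding probabilities 2.0 and 3.0 in log space yields log(5.0).
total = logaddexp(math.log(2.0), math.log(3.0))
```

The early returns handle the log-domain zero (`-inf`), matching the `ZERO` constant the tests compare against.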
# --- tests/mocks/__init__.py ---
from .sine_wave import sine_wave
# --- tests/addition_tests.py ---
import unittest
import numpy
import test_utils
class TestBasicAddition(unittest.TestCase):
    # Test basic addition of all combinations of all types, not checking for any edge cases specifically.

    ZERO = numpy.float32(0)
    ONE = numpy.float32(1)
    MIN_SUBNORM = numpy.float32(1e-45)
    MAX_SUBNORM = numpy.float32(1.1754942e-38)
    MIN_NORM = numpy.float32(1.1754944e-38)
    MAX_NORM = numpy.float32(3.4028235e38)
    INF = numpy.float32(numpy.inf)
    NAN = numpy.float32(numpy.nan)

    # Initialise the tester object used to run the assembled code.
    @classmethod
    def setUpClass(cls):
        cls.tester = test_utils.SubroutineTester("test_addition.s")
        cls.tester.initialise()

    # Run a test to compare the expected sum of two floats to the actual sum.
    def run_test(self, float1: numpy.float32, float2: numpy.float32):
        expected = float1 + float2
        if numpy.isnan(expected):
            self.assertTrue(numpy.isnan(TestBasicAddition.tester.run_test(float1, float2)))
        else:
            self.assertEqual(float1 + float2,
                             TestBasicAddition.tester.run_test(float1, float2))
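`run_test` compares results with `assertEqual`, which treats `+0.0` and `-0.0` as equal. When signed-zero results matter, a bit-exact comparison can be made by inspecting the raw IEEE-754 encoding; a sketch of such a helper (my addition — not part of `test_utils`):

```python
import struct

def float32_bits(x):
    # Raw IEEE-754 single-precision bit pattern of x, as an unsigned 32-bit int:
    # pack to 4 little-endian float bytes, then reinterpret them as an integer.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def same_float32(a, b):
    # Bit-exact comparison: unlike assertEqual, this distinguishes +0.0
    # from -0.0 (and any two NaN payloads compare by their exact bits).
    return float32_bits(a) == float32_bits(b)
```

For example, `same_float32(0.0, -0.0)` is False even though `0.0 == -0.0` is True.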
    def test_zero(self):
        # Test that ±0 + x = x for all types of x.
        self.run_test(self.ZERO, self.ZERO)
        self.run_test(self.ZERO, -self.ZERO)
        self.run_test(-self.ZERO, self.ZERO)
        self.run_test(-self.ZERO, -self.ZERO)
        self.run_test(self.ZERO, self.ONE)
        self.run_test(self.ZERO, -self.ONE)
        self.run_test(-self.ZERO, self.ONE)
        self.run_test(-self.ZERO, -self.ONE)
        self.run_test(self.ZERO, self.MIN_SUBNORM)
        self.run_test(self.ZERO, -self.MIN_SUBNORM)
        self.run_test(-self.ZERO, self.MIN_SUBNORM)
        self.run_test(-self.ZERO, -self.MIN_SUBNORM)
        self.run_test(self.ZERO, numpy.float32(9.060464e-39))
        self.run_test(self.ZERO, -numpy.float32(9.060464e-39))
        self.run_test(-self.ZERO, numpy.float32(9.060464e-39))
        self.run_test(-self.ZERO, -numpy.float32(9.060464e-39))
        self.run_test(self.ZERO, self.MAX_SUBNORM)
        self.run_test(self.ZERO, -self.MAX_SUBNORM)
        self.run_test(-self.ZERO, self.MAX_SUBNORM)
        self.run_test(-self.ZERO, -self.MAX_SUBNORM)
        self.run_test(self.ZERO, self.MIN_NORM)
        self.run_test(self.ZERO, -self.MIN_NORM)
        self.run_test(-self.ZERO, self.MIN_NORM)
        self.run_test(-self.ZERO, -self.MIN_NORM)
        self.run_test(self.ZERO, numpy.float32(395.6166))
        self.run_test(self.ZERO, -numpy.float32(395.6166))
        self.run_test(-self.ZERO, numpy.float32(395.6166))
        self.run_test(-self.ZERO, -numpy.float32(395.6166))
        self.run_test(self.ZERO, self.MAX_NORM)
        self.run_test(self.ZERO, -self.MAX_NORM)
        self.run_test(-self.ZERO, self.MAX_NORM)
        self.run_test(-self.ZERO, -self.MAX_NORM)
        self.run_test(self.ZERO, self.INF)
        self.run_test(self.ZERO, -self.INF)
        self.run_test(-self.ZERO, self.INF)
        self.run_test(-self.ZERO, -self.INF)
        self.run_test(self.ZERO, self.NAN)
        self.run_test(-self.ZERO, self.NAN)

    def test_one(self):
        # Test ±1 + x for all types of x.
        self.run_test(self.ONE, self.ZERO)
        self.run_test(self.ONE, -self.ZERO)
        self.run_test(-self.ONE, self.ZERO)
        self.run_test(-self.ONE, -self.ZERO)
        self.run_test(self.ONE, self.ONE)
        self.run_test(self.ONE, -self.ONE)
        self.run_test(-self.ONE, self.ONE)
        self.run_test(-self.ONE, -self.ONE)
        self.run_test(self.ONE, self.MIN_SUBNORM)
        self.run_test(self.ONE, -self.MIN_SUBNORM)
        self.run_test(-self.ONE, self.MIN_SUBNORM)
        self.run_test(-self.ONE, -self.MIN_SUBNORM)
        self.run_test(self.ONE, numpy.float32(1.902965e-39))
        self.run_test(self.ONE, -numpy.float32(1.902965e-39))
        self.run_test(-self.ONE, numpy.float32(1.902965e-39))
        self.run_test(-self.ONE, -numpy.float32(1.902965e-39))
        self.run_test(self.ONE, self.MAX_SUBNORM)
        self.run_test(self.ONE, -self.MAX_SUBNORM)
        self.run_test(-self.ONE, self.MAX_SUBNORM)
        self.run_test(-self.ONE, -self.MAX_SUBNORM)
        self.run_test(self.ONE, self.MIN_NORM)
        self.run_test(self.ONE, -self.MIN_NORM)
        self.run_test(-self.ONE, self.MIN_NORM)
        self.run_test(-self.ONE, -self.MIN_NORM)
        self.run_test(self.ONE, numpy.float32(7918.158))
        self.run_test(self.ONE, -numpy.float32(7918.158))
        self.run_test(-self.ONE, numpy.float32(7918.158))
        self.run_test(-self.ONE, -numpy.float32(7918.158))
        self.run_test(self.ONE, self.MAX_NORM)
        self.run_test(self.ONE, -self.MAX_NORM)
        self.run_test(-self.ONE, self.MAX_NORM)
        self.run_test(-self.ONE, -self.MAX_NORM)
        self.run_test(self.ONE, self.INF)
        self.run_test(self.ONE, -self.INF)
        self.run_test(-self.ONE, self.INF)
        self.run_test(-self.ONE, -self.INF)
        self.run_test(self.ONE, self.NAN)
        self.run_test(-self.ONE, self.NAN)
    def test_min_subnorm(self):
        # Test ±MIN_SUBNORM + x for all types of x.
        self.run_test(self.MIN_SUBNORM, self.ZERO)
        self.run_test(self.MIN_SUBNORM, -self.ZERO)
        self.run_test(-self.MIN_SUBNORM, self.ZERO)
        self.run_test(-self.MIN_SUBNORM, -self.ZERO)
        self.run_test(self.MIN_SUBNORM, self.ONE)
        self.run_test(self.MIN_SUBNORM, -self.ONE)
        self.run_test(-self.MIN_SUBNORM, self.ONE)
        self.run_test(-self.MIN_SUBNORM, -self.ONE)
        self.run_test(self.MIN_SUBNORM, self.MIN_SUBNORM)
        self.run_test(self.MIN_SUBNORM, -self.MIN_SUBNORM)
        self.run_test(-self.MIN_SUBNORM, self.MIN_SUBNORM)
        self.run_test(-self.MIN_SUBNORM, -self.MIN_SUBNORM)
        self.run_test(self.MIN_SUBNORM, numpy.float32(6.927885e-39))
        self.run_test(self.MIN_SUBNORM, -numpy.float32(6.927885e-39))
        self.run_test(-self.MIN_SUBNORM, numpy.float32(6.927885e-39))
        self.run_test(-self.MIN_SUBNORM, -numpy.float32(6.927885e-39))
        self.run_test(self.MIN_SUBNORM, self.MAX_SUBNORM)
        self.run_test(self.MIN_SUBNORM, -self.MAX_SUBNORM)
        self.run_test(-self.MIN_SUBNORM, self.MAX_SUBNORM)
        self.run_test(-self.MIN_SUBNORM, -self.MAX_SUBNORM)
        self.run_test(self.MIN_SUBNORM, self.MIN_NORM)
        self.run_test(self.MIN_SUBNORM, -self.MIN_NORM)
        self.run_test(-self.MIN_SUBNORM, self.MIN_NORM)
        self.run_test(-self.MIN_SUBNORM, -self.MIN_NORM)
        self.run_test(self.MIN_SUBNORM, numpy.float32(466603.3))
        self.run_test(self.MIN_SUBNORM, -numpy.float32(466603.3))
        self.run_test(-self.MIN_SUBNORM, numpy.float32(466603.3))
        self.run_test(-self.MIN_SUBNORM, -numpy.float32(466603.3))
        self.run_test(self.MIN_SUBNORM, self.MAX_NORM)
        self.run_test(self.MIN_SUBNORM, -self.MAX_NORM)
        self.run_test(-self.MIN_SUBNORM, self.MAX_NORM)
        self.run_test(-self.MIN_SUBNORM, -self.MAX_NORM)
        self.run_test(self.MIN_SUBNORM, self.INF)
        self.run_test(self.MIN_SUBNORM, -self.INF)
        self.run_test(-self.MIN_SUBNORM, self.INF)
        self.run_test(-self.MIN_SUBNORM, -self.INF)
        self.run_test(self.MIN_SUBNORM, self.NAN)
        self.run_test(-self.MIN_SUBNORM, self.NAN)

    def test_subnorm(self):
        # Test ±x + y for subnormal x and all types of y.
        self.run_test(numpy.float32(7.518523e-39), self.ZERO)
        self.run_test(numpy.float32(7.518523e-39), -self.ZERO)
        self.run_test(-numpy.float32(7.518523e-39), self.ZERO)
        self.run_test(-numpy.float32(7.518523e-39), -self.ZERO)
        self.run_test(numpy.float32(2.028916e-39), self.ONE)
        self.run_test(numpy.float32(2.028916e-39), -self.ONE)
        self.run_test(-numpy.float32(2.028916e-39), self.ONE)
        self.run_test(-numpy.float32(2.028916e-39), -self.ONE)
        self.run_test(numpy.float32(4.042427e-39), self.MIN_SUBNORM)
        self.run_test(numpy.float32(4.042427e-39), -self.MIN_SUBNORM)
        self.run_test(-numpy.float32(4.042427e-39), self.MIN_SUBNORM)
        self.run_test(-numpy.float32(4.042427e-39), -self.MIN_SUBNORM)
        self.run_test(numpy.float32(9.636327e-39), numpy.float32(1.0185049e-38))
        self.run_test(numpy.float32(9.636327e-39), -numpy.float32(1.0185049e-38))
        self.run_test(-numpy.float32(9.636327e-39), numpy.float32(1.0185049e-38))
        self.run_test(-numpy.float32(9.636327e-39), -numpy.float32(1.0185049e-38))
        self.run_test(numpy.float32(1.989006e-39), self.MAX_SUBNORM)
        self.run_test(numpy.float32(1.989006e-39), -self.MAX_SUBNORM)
        self.run_test(-numpy.float32(1.989006e-39), self.MAX_SUBNORM)
        self.run_test(-numpy.float32(1.989006e-39), -self.MAX_SUBNORM)
        self.run_test(numpy.float32(2.952435e-39), self.MIN_NORM)
        self.run_test(numpy.float32(2.952435e-39), -self.MIN_NORM)
        self.run_test(-numpy.float32(2.952435e-39), self.MIN_NORM)
        self.run_test(-numpy.float32(2.952435e-39), -self.MIN_NORM)
        self.run_test(numpy.float32(1.154907e-38), numpy.float32(4.0687437e-36))
        self.run_test(numpy.float32(1.154907e-38), -numpy.float32(4.0687437e-36))
        self.run_test(-numpy.float32(1.154907e-38), numpy.float32(4.0687437e-36))
        self.run_test(-numpy.float32(1.154907e-38), -numpy.float32(4.0687437e-36))
        self.run_test(numpy.float32(9.79494e-39), self.MAX_NORM)
        self.run_test(numpy.float32(9.79494e-39), -self.MAX_NORM)
        self.run_test(-numpy.float32(9.79494e-39), self.MAX_NORM)
        self.run_test(-numpy.float32(9.79494e-39), -self.MAX_NORM)
        self.run_test(numpy.float32(1.54569e-39), self.INF)
        self.run_test(numpy.float32(1.54569e-39), -self.INF)
        self.run_test(-numpy.float32(1.54569e-39), self.INF)
        self.run_test(-numpy.float32(1.54569e-39), -self.INF)
        self.run_test(numpy.float32(3.974073e-39), self.NAN)
        self.run_test(-numpy.float32(3.974073e-39), self.NAN)
    def test_max_subnorm(self):
        # Test ±MAX_SUBNORM + x for all types of x.
        self.run_test(self.MAX_SUBNORM, self.ZERO)
        self.run_test(self.MAX_SUBNORM, -self.ZERO)
        self.run_test(-self.MAX_SUBNORM, self.ZERO)
        self.run_test(-self.MAX_SUBNORM, -self.ZERO)
        self.run_test(self.MAX_SUBNORM, self.ONE)
        self.run_test(self.MAX_SUBNORM, -self.ONE)
        self.run_test(-self.MAX_SUBNORM, self.ONE)
        self.run_test(-self.MAX_SUBNORM, -self.ONE)
        self.run_test(self.MAX_SUBNORM, self.MIN_SUBNORM)
        self.run_test(self.MAX_SUBNORM, -self.MIN_SUBNORM)
        self.run_test(-self.MAX_SUBNORM, self.MIN_SUBNORM)
        self.run_test(-self.MAX_SUBNORM, -self.MIN_SUBNORM)
        self.run_test(self.MAX_SUBNORM, numpy.float32(2.736488e-39))
        self.run_test(self.MAX_SUBNORM, -numpy.float32(2.736488e-39))
        self.run_test(-self.MAX_SUBNORM, numpy.float32(2.736488e-39))
        self.run_test(-self.MAX_SUBNORM, -numpy.float32(2.736488e-39))
        self.run_test(self.MAX_SUBNORM, self.MAX_SUBNORM)
        self.run_test(self.MAX_SUBNORM, -self.MAX_SUBNORM)
        self.run_test(-self.MAX_SUBNORM, self.MAX_SUBNORM)
        self.run_test(-self.MAX_SUBNORM, -self.MAX_SUBNORM)
        self.run_test(self.MAX_SUBNORM, self.MIN_NORM)
        self.run_test(self.MAX_SUBNORM, -self.MIN_NORM)
        self.run_test(-self.MAX_SUBNORM, self.MIN_NORM)
        self.run_test(-self.MAX_SUBNORM, -self.MIN_NORM)
        self.run_test(self.MAX_SUBNORM, numpy.float32(8.027242e-35))
        self.run_test(self.MAX_SUBNORM, -numpy.float32(8.027242e-35))
        self.run_test(-self.MAX_SUBNORM, numpy.float32(8.027242e-35))
        self.run_test(-self.MAX_SUBNORM, -numpy.float32(8.027242e-35))
        self.run_test(self.MAX_SUBNORM, self.MAX_NORM)
        self.run_test(self.MAX_SUBNORM, -self.MAX_NORM)
        self.run_test(-self.MAX_SUBNORM, self.MAX_NORM)
        self.run_test(-self.MAX_SUBNORM, -self.MAX_NORM)
        self.run_test(self.MAX_SUBNORM, self.INF)
        self.run_test(self.MAX_SUBNORM, -self.INF)
        self.run_test(-self.MAX_SUBNORM, self.INF)
        self.run_test(-self.MAX_SUBNORM, -self.INF)
        self.run_test(self.MAX_SUBNORM, self.NAN)
        self.run_test(-self.MAX_SUBNORM, self.NAN)

    def test_min_norm(self):
        # Test ±MIN_NORM + x for all types of x.
        self.run_test(self.MIN_NORM, self.ZERO)
        self.run_test(self.MIN_NORM, -self.ZERO)
        self.run_test(-self.MIN_NORM, self.ZERO)
        self.run_test(-self.MIN_NORM, -self.ZERO)
        self.run_test(self.MIN_NORM, self.ONE)
        self.run_test(self.MIN_NORM, -self.ONE)
        self.run_test(-self.MIN_NORM, self.ONE)
        self.run_test(-self.MIN_NORM, -self.ONE)
        self.run_test(self.MIN_NORM, self.MIN_SUBNORM)
        self.run_test(self.MIN_NORM, -self.MIN_SUBNORM)
        self.run_test(-self.MIN_NORM, self.MIN_SUBNORM)
        self.run_test(-self.MIN_NORM, -self.MIN_SUBNORM)
        self.run_test(self.MIN_NORM, numpy.float32(7.235862e-39))
        self.run_test(self.MIN_NORM, -numpy.float32(7.235862e-39))
        self.run_test(-self.MIN_NORM, numpy.float32(7.235862e-39))
        self.run_test(-self.MIN_NORM, -numpy.float32(7.235862e-39))
        self.run_test(self.MIN_NORM, self.MAX_SUBNORM)
        self.run_test(self.MIN_NORM, -self.MAX_SUBNORM)
        self.run_test(-self.MIN_NORM, self.MAX_SUBNORM)
        self.run_test(-self.MIN_NORM, -self.MAX_SUBNORM)
        self.run_test(self.MIN_NORM, self.MIN_NORM)
        self.run_test(self.MIN_NORM, -self.MIN_NORM)
        self.run_test(-self.MIN_NORM, self.MIN_NORM)
        self.run_test(-self.MIN_NORM, -self.MIN_NORM)
        self.run_test(self.MIN_NORM, numpy.float32(3.0655702e-37))
        self.run_test(self.MIN_NORM, -numpy.float32(3.0655702e-37))
        self.run_test(-self.MIN_NORM, numpy.float32(3.0655702e-37))
        self.run_test(-self.MIN_NORM, -numpy.float32(3.0655702e-37))
        self.run_test(self.MIN_NORM, self.MAX_NORM)
        self.run_test(self.MIN_NORM, -self.MAX_NORM)
        self.run_test(-self.MIN_NORM, self.MAX_NORM)
        self.run_test(-self.MIN_NORM, -self.MAX_NORM)
        self.run_test(self.MIN_NORM, self.INF)
        self.run_test(self.MIN_NORM, -self.INF)
        self.run_test(-self.MIN_NORM, self.INF)
        self.run_test(-self.MIN_NORM, -self.INF)
        self.run_test(self.MIN_NORM, self.NAN)
        self.run_test(-self.MIN_NORM, self.NAN)
    def test_norm(self):
        # Test ±x + y for normal x and all types of y.
        self.run_test(numpy.float32(3.2528998e8), self.ZERO)
        self.run_test(numpy.float32(3.2528998e8), -self.ZERO)
        self.run_test(-numpy.float32(3.2528998e8), self.ZERO)
        self.run_test(-numpy.float32(3.2528998e8), -self.ZERO)
        self.run_test(numpy.float32(5781.5137), self.ONE)
        self.run_test(numpy.float32(5781.5137), -self.ONE)
        self.run_test(-numpy.float32(5781.5137), self.ONE)
        self.run_test(-numpy.float32(5781.5137), -self.ONE)
        self.run_test(numpy.float32(4.0233208e-35), self.MIN_SUBNORM)
        self.run_test(numpy.float32(4.0233208e-35), -self.MIN_SUBNORM)
        self.run_test(-numpy.float32(4.0233208e-35), self.MIN_SUBNORM)
        self.run_test(-numpy.float32(4.0233208e-35), -self.MIN_SUBNORM)
        self.run_test(numpy.float32(3.4244755e-37), numpy.float32(7.951416e-39))
        self.run_test(numpy.float32(3.4244755e-37), -numpy.float32(7.951416e-39))
        self.run_test(-numpy.float32(3.4244755e-37), numpy.float32(7.951416e-39))
        self.run_test(-numpy.float32(3.4244755e-37), -numpy.float32(7.951416e-39))
        self.run_test(numpy.float32(1.772688e-35), self.MAX_SUBNORM)
        self.run_test(numpy.float32(1.772688e-35), -self.MAX_SUBNORM)
        self.run_test(-numpy.float32(1.772688e-35), self.MAX_SUBNORM)
        self.run_test(-numpy.float32(1.772688e-35), -self.MAX_SUBNORM)
        self.run_test(numpy.float32(9.7266296e-36), self.MIN_NORM)
        self.run_test(numpy.float32(9.7266296e-36), -self.MIN_NORM)
        self.run_test(-numpy.float32(9.7266296e-36), self.MIN_NORM)
        self.run_test(-numpy.float32(9.7266296e-36), -self.MIN_NORM)
        self.run_test(numpy.float32(9.964942e17), numpy.float32(3.0321312e16))
        self.run_test(numpy.float32(9.964942e17), -numpy.float32(3.0321312e16))
        self.run_test(-numpy.float32(9.964942e17), numpy.float32(3.0321312e16))
        self.run_test(-numpy.float32(9.964942e17), -numpy.float32(3.0321312e16))
        self.run_test(numpy.float32(3.3541464e35), self.MAX_NORM)
        self.run_test(numpy.float32(3.3541464e35), -self.MAX_NORM)
        self.run_test(-numpy.float32(3.3541464e35), self.MAX_NORM)
        self.run_test(-numpy.float32(3.3541464e35), -self.MAX_NORM)
        self.run_test(numpy.float32(1.8177568e25), self.INF)
        self.run_test(numpy.float32(1.8177568e25), -self.INF)
        self.run_test(-numpy.float32(1.8177568e25), self.INF)
        self.run_test(-numpy.float32(1.8177568e25), -self.INF)
        self.run_test(numpy.float32(2.2122593e-30), self.NAN)
        self.run_test(-numpy.float32(2.2122593e-30), self.NAN)

    def test_max_norm(self):
        # Test ±MAX_NORM + x for all types of x.
        self.run_test(self.MAX_NORM, self.ZERO)
        self.run_test(self.MAX_NORM, -self.ZERO)
        self.run_test(-self.MAX_NORM, self.ZERO)
        self.run_test(-self.MAX_NORM, -self.ZERO)
        self.run_test(self.MAX_NORM, self.ONE)
        self.run_test(self.MAX_NORM, -self.ONE)
        self.run_test(-self.MAX_NORM, self.ONE)
        self.run_test(-self.MAX_NORM, -self.ONE)
        self.run_test(self.MAX_NORM, self.MIN_SUBNORM)
        self.run_test(self.MAX_NORM, -self.MIN_SUBNORM)
        self.run_test(-self.MAX_NORM, self.MIN_SUBNORM)
        self.run_test(-self.MAX_NORM, -self.MIN_SUBNORM)
        self.run_test(self.MAX_NORM, numpy.float32(6.985955e-39))
        self.run_test(self.MAX_NORM, -numpy.float32(6.985955e-39))
        self.run_test(-self.MAX_NORM, numpy.float32(6.985955e-39))
        self.run_test(-self.MAX_NORM, -numpy.float32(6.985955e-39))
        self.run_test(self.MAX_NORM, self.MAX_SUBNORM)
        self.run_test(self.MAX_NORM, -self.MAX_SUBNORM)
        self.run_test(-self.MAX_NORM, self.MAX_SUBNORM)
        self.run_test(-self.MAX_NORM, -self.MAX_SUBNORM)
        self.run_test(self.MAX_NORM, self.MIN_NORM)
        self.run_test(self.MAX_NORM, -self.MIN_NORM)
        self.run_test(-self.MAX_NORM, self.MIN_NORM)
        self.run_test(-self.MAX_NORM, -self.MIN_NORM)
        self.run_test(self.MAX_NORM, numpy.float32(5.0028173e34))
        self.run_test(self.MAX_NORM, -numpy.float32(5.0028173e34))
        self.run_test(-self.MAX_NORM, numpy.float32(5.0028173e34))
        self.run_test(-self.MAX_NORM, -numpy.float32(5.0028173e34))
        self.run_test(self.MAX_NORM, self.MAX_NORM)
        self.run_test(self.MAX_NORM, -self.MAX_NORM)
        self.run_test(-self.MAX_NORM, self.MAX_NORM)
        self.run_test(-self.MAX_NORM, -self.MAX_NORM)
        self.run_test(self.MAX_NORM, self.INF)
        self.run_test(self.MAX_NORM, -self.INF)
        self.run_test(-self.MAX_NORM, self.INF)
        self.run_test(-self.MAX_NORM, -self.INF)
self.run_test(self.MAX_NORM, self.NAN)
self.run_test(-self.MAX_NORM, self.NAN)
def test_infinity(self):
# Test ±∞ + x for all types of x.
self.run_test(self.INF, self.ZERO)
self.run_test(self.INF, -self.ZERO)
self.run_test(-self.INF, self.ZERO)
self.run_test(-self.INF, -self.ZERO)
self.run_test(self.INF, self.ONE)
self.run_test(self.INF, -self.ONE)
self.run_test(-self.INF, self.ONE)
self.run_test(-self.INF, -self.ONE)
self.run_test(self.INF, self.MIN_SUBNORM)
self.run_test(self.INF, -self.MIN_SUBNORM)
self.run_test(-self.INF, self.MIN_SUBNORM)
self.run_test(-self.INF, -self.MIN_SUBNORM)
self.run_test(self.INF, numpy.float32(5.804845e-39))
self.run_test(self.INF, -numpy.float32(5.804845e-39))
self.run_test(-self.INF, numpy.float32(5.804845e-39))
self.run_test(-self.INF, -numpy.float32(5.804845e-39))
self.run_test(self.INF, self.MAX_SUBNORM)
self.run_test(self.INF, -self.MAX_SUBNORM)
self.run_test(-self.INF, self.MAX_SUBNORM)
self.run_test(-self.INF, -self.MAX_SUBNORM)
self.run_test(self.INF, self.MIN_NORM)
self.run_test(self.INF, -self.MIN_NORM)
self.run_test(-self.INF, self.MIN_NORM)
self.run_test(-self.INF, -self.MIN_NORM)
self.run_test(self.INF, numpy.float32(2.0581173e8))
self.run_test(self.INF, -numpy.float32(2.0581173e8))
self.run_test(-self.INF, numpy.float32(2.0581173e8))
self.run_test(-self.INF, -numpy.float32(2.0581173e8))
self.run_test(self.INF, self.MAX_NORM)
self.run_test(self.INF, -self.MAX_NORM)
self.run_test(-self.INF, self.MAX_NORM)
self.run_test(-self.INF, -self.MAX_NORM)
self.run_test(self.INF, self.INF)
self.run_test(self.INF, -self.INF)
self.run_test(-self.INF, self.INF)
self.run_test(-self.INF, -self.INF)
self.run_test(self.INF, self.NAN)
self.run_test(-self.INF, self.NAN)
def test_nan(self):
# Test ±NaN + x for all types of x.
self.run_test(self.NAN, self.ZERO)
self.run_test(self.NAN, -self.ZERO)
self.run_test(self.NAN, self.ONE)
self.run_test(self.NAN, -self.ONE)
self.run_test(self.NAN, self.MIN_SUBNORM)
self.run_test(self.NAN, -self.MIN_SUBNORM)
self.run_test(self.NAN, numpy.float32(1.0764164e-38))
self.run_test(self.NAN, -numpy.float32(1.0764164e-38))
self.run_test(self.NAN, self.MAX_SUBNORM)
self.run_test(self.NAN, -self.MAX_SUBNORM)
self.run_test(self.NAN, self.MIN_NORM)
self.run_test(self.NAN, -self.MIN_NORM)
self.run_test(self.NAN, numpy.float32(2.0617456e23))
self.run_test(self.NAN, -numpy.float32(2.0617456e23))
self.run_test(self.NAN, self.MAX_NORM)
self.run_test(self.NAN, -self.MAX_NORM)
self.run_test(self.NAN, self.INF)
self.run_test(self.NAN, -self.INF)
self.run_test(self.NAN, self.NAN)
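The four-line ± patterns repeated throughout these tests enumerate every sign combination of the two operands. The same pairs can be generated programmatically; a minimal stdlib sketch (the `sign_combinations` helper is illustrative, not part of this test suite):

```python
import itertools

def sign_combinations(lhs, rhs):
    # Yield all four +/- pairings of the two operands, matching the
    # four-line run_test pattern repeated throughout the tests above.
    for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
        yield (s1 * lhs, s2 * rhs)

pairs = list(sign_combinations(2.0, 3.0))
assert pairs == [(2.0, 3.0), (2.0, -3.0), (-2.0, 3.0), (-2.0, -3.0)]
```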
class TestNearBaseAddition(unittest.TestCase):
    # Test addition of floats separated by a minimal Hamming distance.
    ZERO = numpy.float32(0)
    ONE = numpy.float32(1)
    MIN_SUBNORM = numpy.float32(1e-45)
    MAX_SUBNORM = numpy.float32(1.1754942e-38)
    MIN_NORM = numpy.float32(1.1754944e-38)
    MAX_NORM = numpy.float32(3.4028235e38)

    # Initialise the tester object used to run the assembled code.
    @classmethod
    def setUpClass(cls):
        cls.tester = test_utils.SubroutineTester("test_addition.s")
        cls.tester.initialise()
    # Run a test by flipping one bit of the second float's mantissa and
    # checking that the sum is still correct.
    def run_test(self, float1: numpy.float32, float2: numpy.float32, flip_bit: int):
        # XOR the requested mantissa bit into the byte representation of float2.
        float2 = numpy.frombuffer(
            bytes(map(lambda a, b: a ^ b,
                      numpy.float32.tobytes(float2),
                      numpy.int32.tobytes(numpy.int32(1 << flip_bit)))),
            dtype=numpy.float32)[0]
        # After flipping a bit, proceed as normal. Use this class's own tester
        # (set up in setUpClass), not TestBasicAddition's.
        expected = float1 + float2
        if numpy.isnan(expected):
            self.assertTrue(numpy.isnan(self.tester.run_test(float1, float2)))
        else:
            self.assertEqual(expected, self.tester.run_test(float1, float2))
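The frombuffer/XOR expression above reinterprets the float's bytes to flip a single mantissa bit. An equivalent stdlib-only sketch using `struct` (the helper name is hypothetical):

```python
import struct

def flip_mantissa_bit(x: float, bit: int) -> float:
    # Pack as IEEE 754 single precision, XOR one mantissa bit (bits 0-22),
    # and unpack the modified bit pattern back into a float.
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# Flipping the top mantissa bit (22) of 1.0 yields 1.5.
assert flip_mantissa_bit(1.0, 22) == 1.5
```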
def test_zero(self):
for flip_bit in range(23):
self.run_test(self.ZERO, self.ZERO, flip_bit)
self.run_test(self.ZERO, -self.ZERO, flip_bit)
self.run_test(-self.ZERO, self.ZERO, flip_bit)
self.run_test(-self.ZERO, -self.ZERO, flip_bit)
self.run_test(self.ZERO, self.ONE, flip_bit)
self.run_test(self.ZERO, -self.ONE, flip_bit)
self.run_test(-self.ZERO, self.ONE, flip_bit)
self.run_test(-self.ZERO, -self.ONE, flip_bit)
self.run_test(self.ZERO, self.MIN_SUBNORM, flip_bit)
self.run_test(self.ZERO, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.ZERO, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.ZERO, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.ZERO, self.MAX_SUBNORM, flip_bit)
self.run_test(self.ZERO, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.ZERO, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.ZERO, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.ZERO, self.MIN_NORM, flip_bit)
self.run_test(self.ZERO, -self.MIN_NORM, flip_bit)
self.run_test(-self.ZERO, self.MIN_NORM, flip_bit)
self.run_test(-self.ZERO, -self.MIN_NORM, flip_bit)
self.run_test(self.ZERO, self.MAX_NORM, flip_bit)
self.run_test(self.ZERO, -self.MAX_NORM, flip_bit)
self.run_test(-self.ZERO, self.MAX_NORM, flip_bit)
self.run_test(-self.ZERO, -self.MAX_NORM, flip_bit)
def test_one(self):
for flip_bit in range(23):
self.run_test(self.ONE, self.ZERO, flip_bit)
self.run_test(self.ONE, -self.ZERO, flip_bit)
self.run_test(-self.ONE, self.ZERO, flip_bit)
self.run_test(-self.ONE, -self.ZERO, flip_bit)
self.run_test(self.ONE, self.ONE, flip_bit)
self.run_test(self.ONE, -self.ONE, flip_bit)
self.run_test(-self.ONE, self.ONE, flip_bit)
self.run_test(-self.ONE, -self.ONE, flip_bit)
self.run_test(self.ONE, self.MIN_SUBNORM, flip_bit)
self.run_test(self.ONE, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.ONE, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.ONE, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.ONE, self.MAX_SUBNORM, flip_bit)
self.run_test(self.ONE, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.ONE, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.ONE, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.ONE, self.MIN_NORM, flip_bit)
self.run_test(self.ONE, -self.MIN_NORM, flip_bit)
self.run_test(-self.ONE, self.MIN_NORM, flip_bit)
self.run_test(-self.ONE, -self.MIN_NORM, flip_bit)
self.run_test(self.ONE, self.MAX_NORM, flip_bit)
self.run_test(self.ONE, -self.MAX_NORM, flip_bit)
self.run_test(-self.ONE, self.MAX_NORM, flip_bit)
self.run_test(-self.ONE, -self.MAX_NORM, flip_bit)
def test_min_subnorm(self):
for flip_bit in range(23):
self.run_test(self.MIN_SUBNORM, self.ZERO, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.ZERO, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.ZERO, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.ZERO, flip_bit)
self.run_test(self.MIN_SUBNORM, self.ONE, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.ONE, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.ONE, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.ONE, flip_bit)
self.run_test(self.MIN_SUBNORM, self.MIN_SUBNORM, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.MIN_SUBNORM, self.MAX_SUBNORM, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.MIN_SUBNORM, self.MIN_NORM, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.MIN_NORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.MIN_NORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.MIN_NORM, flip_bit)
self.run_test(self.MIN_SUBNORM, self.MAX_NORM, flip_bit)
self.run_test(self.MIN_SUBNORM, -self.MAX_NORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, self.MAX_NORM, flip_bit)
self.run_test(-self.MIN_SUBNORM, -self.MAX_NORM, flip_bit)
def test_max_subnorm(self):
for flip_bit in range(23):
self.run_test(self.MAX_SUBNORM, self.ZERO, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.ZERO, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.ZERO, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.ZERO, flip_bit)
self.run_test(self.MAX_SUBNORM, self.ONE, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.ONE, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.ONE, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.ONE, flip_bit)
self.run_test(self.MAX_SUBNORM, self.MIN_SUBNORM, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.MAX_SUBNORM, self.MAX_SUBNORM, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.MAX_SUBNORM, self.MIN_NORM, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.MIN_NORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.MIN_NORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.MIN_NORM, flip_bit)
self.run_test(self.MAX_SUBNORM, self.MAX_NORM, flip_bit)
self.run_test(self.MAX_SUBNORM, -self.MAX_NORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, self.MAX_NORM, flip_bit)
self.run_test(-self.MAX_SUBNORM, -self.MAX_NORM, flip_bit)
def test_min_norm(self):
for flip_bit in range(23):
self.run_test(self.MIN_NORM, self.ZERO, flip_bit)
self.run_test(self.MIN_NORM, -self.ZERO, flip_bit)
self.run_test(-self.MIN_NORM, self.ZERO, flip_bit)
self.run_test(-self.MIN_NORM, -self.ZERO, flip_bit)
self.run_test(self.MIN_NORM, self.ONE, flip_bit)
self.run_test(self.MIN_NORM, -self.ONE, flip_bit)
self.run_test(-self.MIN_NORM, self.ONE, flip_bit)
self.run_test(-self.MIN_NORM, -self.ONE, flip_bit)
self.run_test(self.MIN_NORM, self.MIN_SUBNORM, flip_bit)
self.run_test(self.MIN_NORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MIN_NORM, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MIN_NORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.MIN_NORM, self.MAX_SUBNORM, flip_bit)
self.run_test(self.MIN_NORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MIN_NORM, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MIN_NORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.MIN_NORM, self.MIN_NORM, flip_bit)
self.run_test(self.MIN_NORM, -self.MIN_NORM, flip_bit)
self.run_test(-self.MIN_NORM, self.MIN_NORM, flip_bit)
self.run_test(-self.MIN_NORM, -self.MIN_NORM, flip_bit)
self.run_test(self.MIN_NORM, self.MAX_NORM, flip_bit)
self.run_test(self.MIN_NORM, -self.MAX_NORM, flip_bit)
self.run_test(-self.MIN_NORM, self.MAX_NORM, flip_bit)
self.run_test(-self.MIN_NORM, -self.MAX_NORM, flip_bit)
def test_max_norm(self):
for flip_bit in range(23):
self.run_test(self.MAX_NORM, self.ZERO, flip_bit)
self.run_test(self.MAX_NORM, -self.ZERO, flip_bit)
self.run_test(-self.MAX_NORM, self.ZERO, flip_bit)
self.run_test(-self.MAX_NORM, -self.ZERO, flip_bit)
self.run_test(self.MAX_NORM, self.ONE, flip_bit)
self.run_test(self.MAX_NORM, -self.ONE, flip_bit)
self.run_test(-self.MAX_NORM, self.ONE, flip_bit)
self.run_test(-self.MAX_NORM, -self.ONE, flip_bit)
self.run_test(self.MAX_NORM, self.MIN_SUBNORM, flip_bit)
self.run_test(self.MAX_NORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MAX_NORM, self.MIN_SUBNORM, flip_bit)
self.run_test(-self.MAX_NORM, -self.MIN_SUBNORM, flip_bit)
self.run_test(self.MAX_NORM, self.MAX_SUBNORM, flip_bit)
self.run_test(self.MAX_NORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MAX_NORM, self.MAX_SUBNORM, flip_bit)
self.run_test(-self.MAX_NORM, -self.MAX_SUBNORM, flip_bit)
self.run_test(self.MAX_NORM, self.MIN_NORM, flip_bit)
self.run_test(self.MAX_NORM, -self.MIN_NORM, flip_bit)
self.run_test(-self.MAX_NORM, self.MIN_NORM, flip_bit)
self.run_test(-self.MAX_NORM, -self.MIN_NORM, flip_bit)
self.run_test(self.MAX_NORM, self.MAX_NORM, flip_bit)
self.run_test(self.MAX_NORM, -self.MAX_NORM, flip_bit)
self.run_test(-self.MAX_NORM, self.MAX_NORM, flip_bit)
self.run_test(-self.MAX_NORM, -self.MAX_NORM, flip_bit)
if __name__ == "__main__":
numpy.seterr(over="ignore", invalid="ignore")
unittest.main() | 45.918478 | 169 | 0.661794 | 5,280 | 33,796 | 4.00928 | 0.032386 | 0.168974 | 0.262412 | 0.303982 | 0.963484 | 0.958808 | 0.955406 | 0.951061 | 0.942841 | 0.942841 | 0 | 0.068052 | 0.202568 | 33,796 | 736 | 170 | 45.918478 | 0.717032 | 0.028287 | 0 | 0.087413 | 0 | 0 | 0.001523 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 1 | 0.034965 | false | 0 | 0.005245 | 0 | 0.068182 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
# --- File: python/test/cambi_test.py (repo: vibhoothi/vmaf, license: BSD-2-Clause-Patent) ---
import unittest
from test.testutil import set_default_576_324_videos_for_testing, \
    set_default_576_324_videos_for_testing_scaled, \
    set_default_cambi_video_for_testing_b, \
    set_default_cambi_video_for_testing_10b
from vmaf.core.cambi_feature_extractor import CambiFeatureExtractor
from vmaf.core.cambi_quality_runner import CambiQualityRunner
from vmaf.tools.misc import MyTestCase
class CambiFeatureExtractorTest(MyTestCase):
def tearDown(self):
if hasattr(self, 'fextractor'):
self.fextractor.remove_results()
super().tearDown()
def test_run_cambi_fextractor(self):
_, _, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
0.6892500624999999, places=4)
self.assertAlmostEqual(results[1]['Cambi_feature_cambi_score'],
0.0014658541666666667, places=4)
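These comparisons rely on `assertAlmostEqual(..., places=4)`, which passes when `round(a - b, 4) == 0`, i.e. the two values agree to four decimal places. A self-contained sketch:

```python
import unittest

class AlmostEqualDemo(unittest.TestCase):
    def test_places(self):
        # The pair differs only from the fifth decimal place on,
        # so places=4 passes...
        self.assertAlmostEqual(0.68925006, 0.68927000, places=4)
        # ...while places=5 fails for the same pair.
        with self.assertRaises(AssertionError):
            self.assertAlmostEqual(0.68925006, 0.68927000, places=5)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AlmostEqualDemo))
assert result.wasSuccessful()
```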
def test_run_cambi_fextractor_scaled(self):
_, _, asset, asset_original = set_default_576_324_videos_for_testing_scaled()
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
0.9204257916666666, places=4)
self.assertAlmostEqual(results[1]['Cambi_feature_cambi_score'],
0.004251791666666667, places=4)
def test_run_cambi_fextractor_scaled_b(self):
_, _, asset, asset_original = set_default_cambi_video_for_testing_b()
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
1.218365, places=4)
def test_run_cambi_fextractor_10b(self):
_, _, asset, asset_original = set_default_cambi_video_for_testing_10b()
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
0.01451, places=4)
def test_run_cambi_fextractor_max_log_contrast(self):
_, _, asset, asset_original = set_default_576_324_videos_for_testing()
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={'max_log_contrast': 4}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
0.9182153958333333, places=4)
self.assertAlmostEqual(results[1]['Cambi_feature_cambi_score'],
0.0024499791666667, places=4)
self.fextractor = CambiFeatureExtractor(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={'max_log_contrast': 0}
)
self.fextractor.run(parallelize=True)
results = self.fextractor.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_feature_cambi_score'],
0.015840666666666666, places=4)
self.assertAlmostEqual(results[1]['Cambi_feature_cambi_score'],
0.000671125, places=4)
class CambiQualityRunnerTest(MyTestCase):
def test_run_cambi_runner(self):
_, _, asset, asset_original = set_default_576_324_videos_for_testing()
self.qrunner = CambiQualityRunner(
[asset, asset_original],
None, fifo_mode=False,
result_store=None
)
self.qrunner.run(parallelize=True)
results = self.qrunner.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_score'],
0.6892500624999999, places=4)
self.assertAlmostEqual(results[1]['Cambi_score'],
0.0014658541666666667, places=4)
def test_run_cambi_runner_scale(self):
_, _, asset, asset_original = set_default_576_324_videos_for_testing_scaled()
self.qrunner = CambiQualityRunner(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.qrunner.run(parallelize=True)
results = self.qrunner.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_score'],
0.9204257916666666, places=4)
self.assertAlmostEqual(results[1]['Cambi_score'],
0.004251791666666667, places=4)
def test_run_cambi_runner_scale_b(self):
_, _, asset, asset_original = set_default_cambi_video_for_testing_b()
self.qrunner = CambiQualityRunner(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.qrunner.run(parallelize=True)
results = self.qrunner.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_score'],
1.218365, places=4)
def test_run_cambi_runner_10b(self):
_, _, asset, asset_original = set_default_cambi_video_for_testing_10b()
self.qrunner = CambiQualityRunner(
[asset, asset_original],
None, fifo_mode=False,
result_store=None,
optional_dict={}
)
self.qrunner.run(parallelize=True)
results = self.qrunner.results
# score: arithmetic mean score over all frames
self.assertAlmostEqual(results[0]['Cambi_score'],
0.01451, places=4)
if __name__ == '__main__':
    unittest.main(verbosity=2)
# --- File: delivery-tool/connection.py (repo: tmessini/delivery-tool, license: Apache-2.0) ---
from docker import Client
def get():
    return Client(base_url='unix://var/run/docker.sock')
# --- File: test/snmp/test_process_snmp_data.py (repo: polarG/splunk-connect-for-snmp, license: Apache-2.0) ---
from unittest import TestCase
from unittest.mock import Mock, patch
from splunk_connect_for_snmp.snmp.manager import Poller
class TestProcessSnmpData(TestCase):
@patch("splunk_connect_for_snmp.snmp.manager.isMIBResolved")
@patch("splunk_connect_for_snmp.snmp.manager.get_group_key")
@patch("splunk_connect_for_snmp.snmp.manager.map_metric_type")
@patch("splunk_connect_for_snmp.snmp.manager.extract_index_number")
@patch("time.time")
def test_multiple_metrics_single_group(
self,
m_time,
m_extract_index_number,
m_map_metric_type,
m_get_group_key,
m_resolved,
):
poller = Poller.__new__(Poller)
m_resolved.return_value = True
m_get_group_key.return_value = "QWERTYUIOP"
m_map_metric_type.side_effect = ["g", "g"]
m_extract_index_number.return_value = 1
m_time.return_value = 1640609779.473053
var_bind_mock1_1 = Mock()
var_bind_mock1_2 = Mock()
var_bind_mock2_1 = Mock()
var_bind_mock2_2 = Mock()
var_bind_mock1_1.getMibSymbol.return_value = "IF-MIB", "some_metric", 1
var_bind_mock1_1.prettyPrint.return_value = "some text"
var_bind_mock1_1.getOid.return_value = "1.2.3.4.5.6.7"
var_bind_mock1_2.prettyPrint.return_value = 65
var_bind_mock2_1.getMibSymbol.return_value = "UDP-MIB", "next_metric", 1
var_bind_mock2_1.prettyPrint.return_value = "some text2"
var_bind_mock2_1.getOid.return_value = "9.8.7.6"
var_bind_mock2_2.prettyPrint.return_value = 123
varBindTable = [
(var_bind_mock1_1, var_bind_mock1_2),
(var_bind_mock2_1, var_bind_mock2_2),
]
metrics = {}
mapping = {}
poller.process_snmp_data(varBindTable, metrics, mapping)
self.assertEqual(
{
"QWERTYUIOP": {
"fields": {},
"metrics": {
"IF-MIB.some_metric": {
"oid": "1.2.3.4.5.6.7",
"time": 1640609779.473053,
"type": "g",
"value": 65.0,
},
"UDP-MIB.next_metric": {
"oid": "9.8.7.6",
"time": 1640609779.473053,
"type": "g",
"value": 123.0,
},
},
}
},
metrics,
)
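Note the argument order in these tests: stacked `@patch` decorators apply bottom-up, so the decorator nearest the function supplies the first mock parameter (here `m_time` comes from the innermost `@patch("time.time")`, while `m_resolved` comes from the outermost patch). A minimal stdlib sketch:

```python
import time
from unittest.mock import patch

@patch("time.time")   # outermost decorator -> last mock argument
@patch("time.sleep")  # innermost decorator -> first mock argument
def demo(m_sleep, m_time):
    m_time.return_value = 1640609779.473053
    time.sleep(60)  # intercepted by m_sleep; does not actually block
    m_sleep.assert_called_once_with(60)
    return time.time()

assert demo() == 1640609779.473053
```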
@patch("splunk_connect_for_snmp.snmp.manager.isMIBResolved")
@patch("splunk_connect_for_snmp.snmp.manager.get_group_key")
@patch("splunk_connect_for_snmp.snmp.manager.map_metric_type")
@patch("splunk_connect_for_snmp.snmp.manager.extract_index_number")
@patch("time.time")
def test_multiple_metrics_multiple_groups(
self,
m_time,
m_extract_index_number,
m_map_metric_type,
m_get_group_key,
m_resolved,
):
poller = Poller.__new__(Poller)
m_resolved.return_value = True
m_get_group_key.side_effect = ["GROUP1", "GROUP2"]
m_map_metric_type.side_effect = ["g", "g"]
m_extract_index_number.return_value = 1
m_time.return_value = 1640609779.473053
var_bind_mock1_1 = Mock()
var_bind_mock1_2 = Mock()
var_bind_mock2_1 = Mock()
var_bind_mock2_2 = Mock()
var_bind_mock1_1.getMibSymbol.return_value = "IF-MIB", "some_metric", 1
var_bind_mock1_1.prettyPrint.return_value = "some text"
var_bind_mock1_1.getOid.return_value = "1.2.3.4.5.6.7"
var_bind_mock1_2.prettyPrint.return_value = 65
var_bind_mock2_1.getMibSymbol.return_value = "UDP-MIB", "next_metric", 1
var_bind_mock2_1.prettyPrint.return_value = "some text2"
var_bind_mock2_1.getOid.return_value = "9.8.7.6"
var_bind_mock2_2.prettyPrint.return_value = 123
varBindTable = [
(var_bind_mock1_1, var_bind_mock1_2),
(var_bind_mock2_1, var_bind_mock2_2),
]
metrics = {}
mapping = {}
poller.process_snmp_data(varBindTable, metrics, mapping)
self.assertEqual(
{
"GROUP1": {
"fields": {},
"metrics": {
"IF-MIB.some_metric": {
"oid": "1.2.3.4.5.6.7",
"time": 1640609779.473053,
"type": "g",
"value": 65.0,
}
},
},
"GROUP2": {
"fields": {},
"metrics": {
"UDP-MIB.next_metric": {
"oid": "9.8.7.6",
"time": 1640609779.473053,
"type": "g",
"value": 123.0,
}
},
},
},
metrics,
)
@patch("splunk_connect_for_snmp.snmp.manager.isMIBResolved")
@patch("splunk_connect_for_snmp.snmp.manager.get_group_key")
@patch("splunk_connect_for_snmp.snmp.manager.map_metric_type")
@patch("splunk_connect_for_snmp.snmp.manager.extract_index_number")
@patch("time.time")
def test_metrics_and_fields(
self,
m_time,
m_extract_index_number,
m_map_metric_type,
m_get_group_key,
m_resolved,
):
poller = Poller.__new__(Poller)
m_resolved.return_value = True
m_get_group_key.return_value = "GROUP1"
m_map_metric_type.side_effect = ["g", "r"]
m_extract_index_number.return_value = 1
m_time.return_value = 1640609779.473053
var_bind_mock1_1 = Mock()
var_bind_mock1_2 = Mock()
var_bind_mock2_1 = Mock()
var_bind_mock2_2 = Mock()
var_bind_mock1_1.getMibSymbol.return_value = "IF-MIB", "some_metric", 1
var_bind_mock1_1.prettyPrint.return_value = "some text"
var_bind_mock1_1.getOid.return_value = "1.2.3.4.5.6.7"
var_bind_mock1_2.prettyPrint.return_value = 65
var_bind_mock2_1.getMibSymbol.return_value = "UDP-MIB", "some_field", 1
var_bind_mock2_1.prettyPrint.return_value = "some text2"
var_bind_mock2_1.getOid.return_value = "9.8.7.6"
var_bind_mock2_2.prettyPrint.return_value = "up and running"
varBindTable = [
(var_bind_mock1_1, var_bind_mock1_2),
(var_bind_mock2_1, var_bind_mock2_2),
]
metrics = {}
mapping = {}
poller.process_snmp_data(varBindTable, metrics, mapping)
self.assertEqual(
{
"GROUP1": {
"fields": {
"UDP-MIB.some_field": {
"oid": "9.8.7.6",
"time": 1640609779.473053,
"type": "r",
"value": "up and running",
}
},
"metrics": {
"IF-MIB.some_metric": {
"oid": "1.2.3.4.5.6.7",
"time": 1640609779.473053,
"type": "g",
"value": 65.0,
}
},
}
},
metrics,
)
@patch("splunk_connect_for_snmp.snmp.manager.isMIBResolved")
@patch("splunk_connect_for_snmp.snmp.manager.get_group_key")
@patch("splunk_connect_for_snmp.snmp.manager.map_metric_type")
@patch("splunk_connect_for_snmp.snmp.manager.extract_index_number")
@patch("time.time")
def test_metrics_with_profile(
self,
m_time,
m_extract_index_number,
m_map_metric_type,
m_get_group_key,
m_resolved,
):
poller = Poller.__new__(Poller)
m_resolved.return_value = True
m_get_group_key.return_value = "QWERTYUIOP"
m_map_metric_type.side_effect = ["g", "g"]
m_extract_index_number.return_value = 1
m_time.return_value = 1640609779.473053
var_bind_mock1_1 = Mock()
var_bind_mock1_2 = Mock()
var_bind_mock2_1 = Mock()
var_bind_mock2_2 = Mock()
var_bind_mock1_1.getMibSymbol.return_value = "IF-MIB", "some_metric", 1
var_bind_mock1_1.prettyPrint.return_value = "some text"
var_bind_mock1_1.getOid.return_value = "1.2.3.4.5.6.7"
var_bind_mock1_2.prettyPrint.return_value = 65
var_bind_mock2_1.getMibSymbol.return_value = "UDP-MIB", "next_metric", 1
var_bind_mock2_1.prettyPrint.return_value = "some text2"
var_bind_mock2_1.getOid.return_value = "9.8.7.6"
var_bind_mock2_2.prettyPrint.return_value = 123
varBindTable = [
(var_bind_mock1_1, var_bind_mock1_2),
(var_bind_mock2_1, var_bind_mock2_2),
]
metrics = {}
mapping = {"IF-MIB:some_metric": "profile1", "UDP-MIB:next_metric": "profile2"}
poller.process_snmp_data(varBindTable, metrics, "some_target", mapping)
self.assertEqual(
{
"QWERTYUIOP": {
"fields": {},
"metrics": {
"IF-MIB.some_metric": {
"oid": "1.2.3.4.5.6.7",
"time": 1640609779.473053,
"type": "g",
"value": 65.0,
},
"UDP-MIB.next_metric": {
"oid": "9.8.7.6",
"time": 1640609779.473053,
"type": "g",
"value": 123.0,
},
},
"profiles": ["profile1", "profile2"],
}
},
metrics,
)
# --- File: test-framework/test-suites/integration/tests/list/test_list_switch_host.py (repo: knutsonchris/stacki, license: BSD-3-Clause) ---
import json
class TestListSwitchHost:
def test_no_args(self, host, add_host, add_switch):
# Add interfaces to our switch and backend
result = host.run('stack add host interface switch-0-0 interface=eth0 network=private')
assert result.rc == 0
result = host.run('stack add host interface backend-0-0 interface=eth0 network=private')
assert result.rc == 0
# Add our backend to the test switch
result = host.run('stack add switch host switch-0-0 host=backend-0-0 interface=eth0 port=1')
assert result.rc == 0
# List the switch hosts
result = host.run('stack list switch host output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [{
'host': 'backend-0-0',
'interface': 'eth0',
'mac': None,
'port': 1,
'switch': 'switch-0-0',
'vlan': None
}]
def test_one_arg(self, host, add_host, add_switch):
# Add interfaces to our switch and backend
result = host.run('stack add host interface switch-0-0 interface=eth0 network=private')
assert result.rc == 0
result = host.run('stack add host interface backend-0-0 interface=eth0 network=private')
assert result.rc == 0
# Add our backend to the test switch
result = host.run('stack add switch host switch-0-0 host=backend-0-0 interface=eth0 port=1')
assert result.rc == 0
# List the switch hosts
result = host.run('stack list switch host switch-0-0 output-format=json')
assert result.rc == 0
assert json.loads(result.stdout) == [{
'host': 'backend-0-0',
'interface': 'eth0',
'mac': None,
'port': 1,
'switch': 'switch-0-0',
'vlan': None
}]
def test_skip_interface(self, host, add_host, add_switch):
# Add interfaces to our switch and backend
result = host.run('stack add host interface switch-0-0 interface=eth0 network=private')
assert result.rc == 0
result = host.run('stack add host interface backend-0-0 interface=eth0 network=private')
assert result.rc == 0
# Add our backend to the test switch
result = host.run('stack add switch host switch-0-0 host=backend-0-0 interface=eth0 port=1')
assert result.rc == 0
		# Now remove the backend interface, which should cause it to be skipped in the listing
result = host.run('stack remove host interface backend-0-0 interface=eth0')
assert result.rc == 0
# List the switch hosts
result = host.run('stack list switch host switch-0-0 output-format=json')
assert result.rc == 0
assert result.stdout == ''
| 33.109589 | 94 | 0.698386 | 381 | 2,417 | 4.39895 | 0.125984 | 0.022673 | 0.100835 | 0.139618 | 0.908711 | 0.908711 | 0.908711 | 0.887828 | 0.887828 | 0.887828 | 0 | 0.034274 | 0.179148 | 2,417 | 72 | 95 | 33.569444 | 0.810484 | 0.14729 | 0 | 0.833333 | 0 | 0.0625 | 0.449268 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.0625 | false | 0 | 0.020833 | 0 | 0.104167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
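The tests in the row above compare `json.loads` output from `stack list switch host` against a list of dicts. A minimal sketch of that same comparison pattern in plain Python (the `stack` CLI and the pytest fixtures are assumed and not reproduced; the sample stdout below is hypothetical, mirroring the structure the tests expect):

```python
import json

# Simulated stdout from `stack list switch host output-format=json`
stdout = ('[{"host": "backend-0-0", "interface": "eth0", "mac": null, '
          '"port": 1, "switch": "switch-0-0", "vlan": null}]')

parsed = json.loads(stdout)

# json.loads maps JSON null to Python None, which is why the expected
# dicts in the tests use None for 'mac' and 'vlan'
expected = [{
    'host': 'backend-0-0',
    'interface': 'eth0',
    'mac': None,
    'port': 1,
    'switch': 'switch-0-0',
    'vlan': None,
}]

assert parsed == expected
```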
9ecfb6ce02834becf36bbd15c3c256a3fa33b5db | 2,912 | py | Python | src/dataset.py | Sriharsha-hatwar/Tweet-Sentiment-Span-Extraction | 178aa3b9c42215f446ee42c5b544f1698dfbc932 | [
"BSD-3-Clause"
] | null | null | null | src/dataset.py | Sriharsha-hatwar/Tweet-Sentiment-Span-Extraction | 178aa3b9c42215f446ee42c5b544f1698dfbc932 | [
"BSD-3-Clause"
] | 1 | 2020-06-19T18:21:43.000Z | 2020-06-19T18:21:44.000Z | src/dataset.py | Sriharsha-hatwar/Tweet-Sentiment-Span-Extraction | 178aa3b9c42215f446ee42c5b544f1698dfbc932 | [
"BSD-3-Clause"
] | null | null | null | import os
import utils
import torch
import logging
from config import BERTConfig, RoBERTaConfig
from torch.utils import data
class TweetDataSetForBert(data.Dataset):
def __init__(self, tweet_texts, selected_texts, sentiments, preprocess_texts):
self.tweet_texts = tweet_texts
self.selected_texts = selected_texts
self.sentiments = sentiments
self.tokenizer = BERTConfig.TOKENIZER
self.max_len = BERTConfig.MAX_LEN
self.preprocess_texts = preprocess_texts
def __len__(self):
return len(self.tweet_texts)
def __getitem__(self, item):
data = utils.preprocess_bert(
self.tweet_texts[item],
self.selected_texts[item],
self.sentiments[item],
self.tokenizer,
self.max_len
)
# Write the preprocessing step here.
return {
'ids' : torch.tensor(data['ids'], dtype=torch.long),
'mask' : torch.tensor(data['mask'], dtype=torch.long),
'token_type_ids' : torch.tensor(data['token_type_ids'], dtype=torch.long),
'tweet_offsets' : torch.tensor(data['tweet_offsets'], dtype=torch.long),
'target_start' : torch.tensor(data['target_start'], dtype=torch.long),
'target_end' : torch.tensor(data['target_end'], dtype=torch.long),
'sentiment' : data['sentiment'],
'orig_tweet' : data['orig_tweet'],
'orig_selected' : data['orig_selected']
}
class TweetDataSetForRoBERTa(data.Dataset):
def __init__(self, tweet_texts, selected_texts, sentiments, preprocess_texts):
self.tweet_texts = tweet_texts
self.selected_texts = selected_texts
self.sentiments = sentiments
self.tokenizer = RoBERTaConfig.TOKENIZER
self.max_len = RoBERTaConfig.MAX_LEN
self.preprocess_texts = preprocess_texts
def __len__(self):
return len(self.tweet_texts)
def __getitem__(self, item):
data = utils.preprocess_roberta(
self.tweet_texts[item],
self.selected_texts[item],
self.sentiments[item],
self.tokenizer,
self.max_len
)
return {
'ids' : torch.tensor(data['ids'], dtype=torch.long),
'mask' : torch.tensor(data['mask'], dtype=torch.long),
'token_type_ids' : torch.tensor(data['token_type_ids'], dtype=torch.long),
'tweet_offsets' : torch.tensor(data['tweet_offsets'], dtype=torch.long),
'target_start' : torch.tensor(data['target_start'], dtype=torch.long),
'target_end' : torch.tensor(data['target_end'], dtype=torch.long),
'sentiment' : data['sentiment'],
'orig_tweet' : data['orig_tweet'],
'orig_selected' : data['orig_selected']
}
| 37.818182 | 86 | 0.609547 | 315 | 2,912 | 5.368254 | 0.152381 | 0.07806 | 0.106446 | 0.044944 | 0.833826 | 0.833826 | 0.833826 | 0.833826 | 0.833826 | 0.833826 | 0 | 0 | 0.277129 | 2,912 | 76 | 87 | 38.315789 | 0.803325 | 0.011676 | 0 | 0.71875 | 0 | 0 | 0.122435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.09375 | 0.03125 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
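Both classes in the row above implement PyTorch's map-style dataset protocol: `__len__` reports the sample count and `__getitem__` returns one preprocessed sample dict. A minimal sketch of that protocol in plain Python (torch, the tokenizer configs, and the project's `utils.preprocess_*` helpers are assumed and not reproduced; `MiniTweetDataSet` is a hypothetical stand-in):

```python
class MiniTweetDataSet:
    """Map-style dataset sketch: __len__ + __getitem__ is all that
    torch.utils.data.DataLoader requires of a map-style dataset."""

    def __init__(self, tweet_texts, selected_texts, sentiments):
        self.tweet_texts = tweet_texts
        self.selected_texts = selected_texts
        self.sentiments = sentiments

    def __len__(self):
        return len(self.tweet_texts)

    def __getitem__(self, item):
        # The real classes call utils.preprocess_bert / preprocess_roberta
        # here and wrap the results in torch tensors
        return {
            'orig_tweet': self.tweet_texts[item],
            'orig_selected': self.selected_texts[item],
            'sentiment': self.sentiments[item],
        }

ds = MiniTweetDataSet(['great day'], ['great'], ['positive'])
assert len(ds) == 1
assert ds[0]['sentiment'] == 'positive'
```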
7349ac0505302b5e65b1e8757b3a7882d1c376b4 | 80 | py | Python | mas_tools/api/__init__.py | tau-lex/market-analysis-system | 19926ca92bcab31a51fdac8f1910c8775d3d38d1 | [
"MIT"
] | 11 | 2018-03-08T14:23:45.000Z | 2020-08-14T03:59:43.000Z | mas_tools/api/__init__.py | terentjew-alexey/market-analysis-system | 19926ca92bcab31a51fdac8f1910c8775d3d38d1 | [
"MIT"
] | null | null | null | mas_tools/api/__init__.py | terentjew-alexey/market-analysis-system | 19926ca92bcab31a51fdac8f1910c8775d3d38d1 | [
"MIT"
] | 7 | 2017-06-01T01:28:06.000Z | 2020-07-10T22:24:42.000Z | from mas_tools.api.binance import Binance
# from mas_tools.api.exmo import Exmo
| 26.666667 | 41 | 0.825 | 14 | 80 | 4.571429 | 0.5 | 0.21875 | 0.375 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1125 | 80 | 2 | 42 | 40 | 0.901408 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
b405c8b01efe8a9e5b06e71cb4d89063a4536eca | 1,380 | py | Python | mercury_ml/tensorflow/prediction.py | mercury-ml-team/mercury-ml | 8d27816490f0be46f871e889e4635e9223b7044c | [
"MIT"
] | 43 | 2019-02-01T15:22:09.000Z | 2020-02-21T12:51:42.000Z | mercury_ml/tensorflow/prediction.py | mercury-ml-team/mercury-ml | 8d27816490f0be46f871e889e4635e9223b7044c | [
"MIT"
] | 17 | 2019-02-15T12:52:18.000Z | 2019-05-09T15:42:51.000Z | mercury_ml/tensorflow/prediction.py | mercury-ml-team/mercury-ml | 8d27816490f0be46f871e889e4635e9223b7044c | [
"MIT"
] | 12 | 2019-02-02T16:48:10.000Z | 2019-12-16T15:40:15.000Z | def predict(data_set, model, **kwargs):
"""
Produces predictions with a trained Keras model where inputs are arrays
:param DataSet data_set: A DataSet with a NumpyDataWrapper called "features"
:param model: A (fitted) Keras model
:param kwargs: Additional parameters to be passed to model.predict
:return: A NumpyDataWrapper with an array of predictions as its underlying
"""
prediction_array = model.predict(x=data_set.features.underlying, **kwargs)
from mercury_ml.common.data_wrappers.numpy import NumpyDataWrapper
return NumpyDataWrapper(underlying=prediction_array, field_names=data_set.targets.field_names)
def predict_generator(data_set, model, **kwargs):
"""
Produces predictions with a trained Keras model where inputs are generators
:param DataSet data_set: A DataSet with a KerasIteratorFeaturesDataWrapper called "features"
:param model: A (fitted) Keras model
    :param kwargs: Additional parameters to be passed to model.predict_generator
:return: A NumpyDataWrapper with an array of predictions as its underlying
"""
prediction_array = model.predict_generator(generator=data_set.features.underlying, **kwargs)
from mercury_ml.common.data_wrappers.numpy import NumpyDataWrapper
return NumpyDataWrapper(underlying=prediction_array, field_names=data_set.features.underlying.get_labels_dummies()) | 49.285714 | 119 | 0.776812 | 176 | 1,380 | 5.960227 | 0.295455 | 0.053384 | 0.095329 | 0.071497 | 0.844614 | 0.844614 | 0.844614 | 0.844614 | 0.783603 | 0.783603 | 0 | 0 | 0.154348 | 1,380 | 28 | 119 | 49.285714 | 0.898886 | 0.49058 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
b42e01b497eed8a4616e56359654df96530b2429 | 37,311 | py | Python | feersum_nlu/api/regex_entity_extractors_api.py | praekelt/feersum-nlu-api-wrappers | 6580e2bab2c8a764fe868a505330b3fee6029074 | [
"BSD-3-Clause"
] | 9 | 2017-10-10T12:24:23.000Z | 2021-08-18T14:07:51.000Z | feersum_nlu/api/regex_entity_extractors_api.py | praekelt/feersum-nlu-api-wrappers | 6580e2bab2c8a764fe868a505330b3fee6029074 | [
"BSD-3-Clause"
] | 1 | 2020-12-06T11:03:25.000Z | 2021-04-14T05:21:23.000Z | feersum_nlu/api/regex_entity_extractors_api.py | praekelt/feersum-nlu-api-wrappers | 6580e2bab2c8a764fe868a505330b3fee6029074 | [
"BSD-3-Clause"
] | 2 | 2019-02-12T08:26:06.000Z | 2022-02-01T09:39:47.000Z | # coding: utf-8
"""
FeersumNLU API
This is the HTTP API for Feersum NLU. See https://github.com/praekelt/feersum-nlu-api-wrappers for examples of how to use the API. # noqa: E501
OpenAPI spec version: 2.0.54.dev2
Contact: nlu@feersum.io
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from feersum_nlu.api_client import ApiClient
class RegexEntityExtractorsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def regex_entity_extractor_create(self, create_details, **kwargs): # noqa: E501
"""Create a regular expression entity extractor. # noqa: E501
Create a new regular expression entity extractor or reload one from the trash. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_create(create_details, async_req=True)
>>> result = thread.get()
:param async_req bool
:param RegexEntityExtractorCreateDetails create_details: The details of the instance to create. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_create_with_http_info(create_details, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_create_with_http_info(create_details, **kwargs) # noqa: E501
return data
def regex_entity_extractor_create_with_http_info(self, create_details, **kwargs): # noqa: E501
"""Create a regular expression entity extractor. # noqa: E501
Create a new regular expression entity extractor or reload one from the trash. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_create_with_http_info(create_details, async_req=True)
>>> result = thread.get()
:param async_req bool
:param RegexEntityExtractorCreateDetails create_details: The details of the instance to create. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['create_details', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_create" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'create_details' is set
if ('create_details' not in params or
params['create_details'] is None):
raise ValueError("Missing the required parameter `create_details` when calling `regex_entity_extractor_create`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'create_details' in params:
body_params = params['create_details']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RegexEntityExtractorInstanceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_del(self, instance_name, **kwargs): # noqa: E501
"""Delete named instance. # noqa: E501
Delete and get the details of the named regular expression entity extractor instance. Deleted models can be reloaded from the trash with the create operation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_del(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_del_with_http_info(instance_name, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_del_with_http_info(instance_name, **kwargs) # noqa: E501
return data
def regex_entity_extractor_del_with_http_info(self, instance_name, **kwargs): # noqa: E501
"""Delete named instance. # noqa: E501
Delete and get the details of the named regular expression entity extractor instance. Deleted models can be reloaded from the trash with the create operation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_del_with_http_info(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_del" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_del`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RegexEntityExtractorInstanceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_get_details(self, instance_name, **kwargs): # noqa: E501
"""Get details of named instance. # noqa: E501
Get the details of the named regular expression entity extractor instance. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_details(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_get_details_with_http_info(instance_name, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_get_details_with_http_info(instance_name, **kwargs) # noqa: E501
return data
def regex_entity_extractor_get_details_with_http_info(self, instance_name, **kwargs): # noqa: E501
"""Get details of named instance. # noqa: E501
Get the details of the named regular expression entity extractor instance. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_details_with_http_info(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_get_details" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_get_details`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RegexEntityExtractorInstanceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_get_details_all(self, **kwargs): # noqa: E501
"""Get list of regular expression entity extractors. # noqa: E501
Get the list of regular expression entity extractors. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_details_all(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str x_caller:
:return: list[RegexEntityExtractorInstanceDetail]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_get_details_all_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_get_details_all_with_http_info(**kwargs) # noqa: E501
return data
def regex_entity_extractor_get_details_all_with_http_info(self, **kwargs): # noqa: E501
"""Get list of regular expression entity extractors. # noqa: E501
Get the list of regular expression entity extractors. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_details_all_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str x_caller:
:return: list[RegexEntityExtractorInstanceDetail]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_get_details_all" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[RegexEntityExtractorInstanceDetail]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_get_params(self, instance_name, **kwargs): # noqa: E501
"""Get the editable model parameters of named regex entity extractor. # noqa: E501
Get the editable model parameters of named regex entity extractor. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_params(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: ModelParams
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_get_params_with_http_info(instance_name, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_get_params_with_http_info(instance_name, **kwargs) # noqa: E501
return data
def regex_entity_extractor_get_params_with_http_info(self, instance_name, **kwargs): # noqa: E501
"""Get the editable model parameters of named regex entity extractor. # noqa: E501
Get the editable model parameters of named regex entity extractor. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_get_params_with_http_info(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: ModelParams
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_get_params" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_get_params`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}/params', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ModelParams', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_retrieve(self, instance_name, text_input, **kwargs): # noqa: E501
"""Extract information based on the regular expression. # noqa: E501
Extract the entities matching the regular expression. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_retrieve(instance_name, text_input, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param TextInput text_input: The input text. (required)
:param str x_caller:
:return: list[RegexEntity]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_retrieve_with_http_info(instance_name, text_input, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_retrieve_with_http_info(instance_name, text_input, **kwargs) # noqa: E501
return data
def regex_entity_extractor_retrieve_with_http_info(self, instance_name, text_input, **kwargs): # noqa: E501
"""Extract information based on the regular expression. # noqa: E501
Extract the entities matching the regular expression. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_retrieve_with_http_info(instance_name, text_input, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param TextInput text_input: The input text. (required)
:param str x_caller:
:return: list[RegexEntity]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'text_input', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_retrieve" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_retrieve`") # noqa: E501
# verify the required parameter 'text_input' is set
if ('text_input' not in params or
params['text_input'] is None):
raise ValueError("Missing the required parameter `text_input` when calling `regex_entity_extractor_retrieve`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'text_input' in params:
body_params = params['text_input']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}/retrieve', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[RegexEntity]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_set_params(self, instance_name, model_params, **kwargs): # noqa: E501
"""Set the model parameters of named regex entity extractor. # noqa: E501
Set the model parameters of named regex entity extractor. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_set_params(instance_name, model_params, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param ModelParams model_params: The model parameters. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_set_params_with_http_info(instance_name, model_params, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_set_params_with_http_info(instance_name, model_params, **kwargs) # noqa: E501
return data
def regex_entity_extractor_set_params_with_http_info(self, instance_name, model_params, **kwargs): # noqa: E501
"""Set the model parameters of the named regex entity extractor. # noqa: E501
Set the model parameters of the named regex entity extractor. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_set_params_with_http_info(instance_name, model_params, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param ModelParams model_params: The model parameters. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'model_params', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_set_params" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_set_params`") # noqa: E501
# verify the required parameter 'model_params' is set
if ('model_params' not in params or
params['model_params'] is None):
raise ValueError("Missing the required parameter `model_params` when calling `regex_entity_extractor_set_params`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'model_params' in params:
body_params = params['model_params']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}/params', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RegexEntityExtractorInstanceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def regex_entity_extractor_vaporise(self, instance_name, **kwargs): # noqa: E501
"""Vaporise the named model. # noqa: E501
Permanently vaporises a model even if not trashed. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_vaporise(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.regex_entity_extractor_vaporise_with_http_info(instance_name, **kwargs) # noqa: E501
else:
(data) = self.regex_entity_extractor_vaporise_with_http_info(instance_name, **kwargs) # noqa: E501
return data
def regex_entity_extractor_vaporise_with_http_info(self, instance_name, **kwargs): # noqa: E501
"""Vaporise the named model. # noqa: E501
Permanently vaporises a model even if not trashed. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.regex_entity_extractor_vaporise_with_http_info(instance_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str instance_name: The name of the instance. (required)
:param str x_caller:
:return: RegexEntityExtractorInstanceDetail
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['instance_name', 'x_caller'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method regex_entity_extractor_vaporise" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'instance_name' is set
if ('instance_name' not in params or
params['instance_name'] is None):
raise ValueError("Missing the required parameter `instance_name` when calling `regex_entity_extractor_vaporise`") # noqa: E501
collection_formats = {}
path_params = {}
if 'instance_name' in params:
path_params['instance_name'] = params['instance_name'] # noqa: E501
query_params = []
header_params = {}
if 'x_caller' in params:
header_params['X-CALLER'] = params['x_caller'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['APIKeyHeader', 'APIKeyHeader_old'] # noqa: E501
return self.api_client.call_api(
'/nlu/v2/regex_entity_extractors/{instance_name}/vaporise', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='RegexEntityExtractorInstanceDetail', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
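The generated methods above all share the same sync/async contract: with `async_req=True` the call returns a thread-like object whose `.get()` blocks for the response. A minimal, library-free sketch of that pattern follows; the function body and payload here are hypothetical stand-ins for the real HTTP call.

```python
from multiprocessing.pool import ThreadPool

# Hypothetical stand-in for the generated client's HTTP call; swagger-codegen
# clients dispatch it on a thread pool when async_req=True and hand back an
# AsyncResult whose .get() blocks until the worker finishes.
def regex_entity_extractor_retrieve(instance_name, text_input):
    return [{"instance": instance_name, "text": text_input}]

pool = ThreadPool(processes=1)
thread = pool.apply_async(regex_entity_extractor_retrieve,
                          ("my_extractor", "hello"))
result = thread.get()  # synchronous rendezvous with the async request
pool.close()
pool.join()
```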
| 43.034602 | 180 | 0.637319 | 4,299 | 37,311 | 5.254478 | 0.046755 | 0.048165 | 0.064633 | 0.025499 | 0.96516 | 0.952366 | 0.948913 | 0.941919 | 0.931914 | 0.921422 | 0 | 0.015844 | 0.277666 | 37,311 | 866 | 181 | 43.084296 | 0.822307 | 0.342794 | 0 | 0.798701 | 0 | 0 | 0.221076 | 0.073516 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036797 | false | 0 | 0.008658 | 0 | 0.099567 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b4442a85c982dc1b47cdab365d80d35405d5a2df | 4,737 | py | Python | sdk/python/pulumi_alicloud/hbase/_inputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/hbase/_inputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/hbase/_inputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'InstanceSlbConnAddrArgs',
'InstanceUiProxyConnAddrArgs',
'InstanceZkConnAddrArgs',
]
@pulumi.input_type
class InstanceSlbConnAddrArgs:
def __init__(__self__, *,
conn_addr: Optional[pulumi.Input[str]] = None,
conn_addr_port: Optional[pulumi.Input[str]] = None,
net_type: Optional[pulumi.Input[str]] = None):
if conn_addr is not None:
pulumi.set(__self__, "conn_addr", conn_addr)
if conn_addr_port is not None:
pulumi.set(__self__, "conn_addr_port", conn_addr_port)
if net_type is not None:
pulumi.set(__self__, "net_type", net_type)
@property
@pulumi.getter(name="connAddr")
def conn_addr(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr")
@conn_addr.setter
def conn_addr(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr", value)
@property
@pulumi.getter(name="connAddrPort")
def conn_addr_port(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr_port")
@conn_addr_port.setter
def conn_addr_port(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr_port", value)
@property
@pulumi.getter(name="netType")
def net_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "net_type")
@net_type.setter
def net_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "net_type", value)
@pulumi.input_type
class InstanceUiProxyConnAddrArgs:
def __init__(__self__, *,
conn_addr: Optional[pulumi.Input[str]] = None,
conn_addr_port: Optional[pulumi.Input[str]] = None,
net_type: Optional[pulumi.Input[str]] = None):
if conn_addr is not None:
pulumi.set(__self__, "conn_addr", conn_addr)
if conn_addr_port is not None:
pulumi.set(__self__, "conn_addr_port", conn_addr_port)
if net_type is not None:
pulumi.set(__self__, "net_type", net_type)
@property
@pulumi.getter(name="connAddr")
def conn_addr(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr")
@conn_addr.setter
def conn_addr(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr", value)
@property
@pulumi.getter(name="connAddrPort")
def conn_addr_port(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr_port")
@conn_addr_port.setter
def conn_addr_port(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr_port", value)
@property
@pulumi.getter(name="netType")
def net_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "net_type")
@net_type.setter
def net_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "net_type", value)
@pulumi.input_type
class InstanceZkConnAddrArgs:
def __init__(__self__, *,
conn_addr: Optional[pulumi.Input[str]] = None,
conn_addr_port: Optional[pulumi.Input[str]] = None,
net_type: Optional[pulumi.Input[str]] = None):
if conn_addr is not None:
pulumi.set(__self__, "conn_addr", conn_addr)
if conn_addr_port is not None:
pulumi.set(__self__, "conn_addr_port", conn_addr_port)
if net_type is not None:
pulumi.set(__self__, "net_type", net_type)
@property
@pulumi.getter(name="connAddr")
def conn_addr(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr")
@conn_addr.setter
def conn_addr(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr", value)
@property
@pulumi.getter(name="connAddrPort")
def conn_addr_port(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "conn_addr_port")
@conn_addr_port.setter
def conn_addr_port(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "conn_addr_port", value)
@property
@pulumi.getter(name="netType")
def net_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "net_type")
@net_type.setter
def net_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "net_type", value)
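Stripped of the Pulumi runtime, each generated input type above reduces to the same optional-field pattern: store a value only when it was provided and expose it through a property/setter pair. A Pulumi-free sketch of that pattern (the class below is illustrative only, not the real `InstanceSlbConnAddrArgs`):

```python
class SlbConnAddrArgsSketch:
    """Illustrative, Pulumi-free reduction of InstanceSlbConnAddrArgs."""
    def __init__(self, conn_addr=None, conn_addr_port=None, net_type=None):
        self._values = {}
        # mirror the generated guard: only record fields that were provided
        if conn_addr is not None:
            self._values["conn_addr"] = conn_addr
        if conn_addr_port is not None:
            self._values["conn_addr_port"] = conn_addr_port
        if net_type is not None:
            self._values["net_type"] = net_type

    @property
    def conn_addr(self):
        return self._values.get("conn_addr")

    @conn_addr.setter
    def conn_addr(self, value):
        self._values["conn_addr"] = value

args = SlbConnAddrArgsSketch(conn_addr="10.0.0.1", conn_addr_port="2181")
```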
| 33.835714 | 87 | 0.652734 | 617 | 4,737 | 4.726094 | 0.110211 | 0.148148 | 0.175926 | 0.203704 | 0.858368 | 0.858368 | 0.858368 | 0.858368 | 0.858368 | 0.858368 | 0 | 0.000271 | 0.221026 | 4,737 | 139 | 88 | 34.079137 | 0.789973 | 0.037365 | 0 | 0.880734 | 1 | 0 | 0.094862 | 0.01581 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192661 | false | 0 | 0.045872 | 0.082569 | 0.348624 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
b4592d2271bbbad5f09e7cccc2416f4a5be8890b | 19,204 | py | Python | tests/client/test_client.py | cfogg/python-client | 40e6891c8240e6b2acd5df538e622e9f15de43d6 | [
"Apache-2.0"
] | 13 | 2017-03-17T15:15:20.000Z | 2022-03-14T22:24:10.000Z | tests/client/test_client.py | cfogg/python-client | 40e6891c8240e6b2acd5df538e622e9f15de43d6 | [
"Apache-2.0"
] | 81 | 2017-01-12T23:06:48.000Z | 2022-02-21T18:20:23.000Z | tests/client/test_client.py | cfogg/python-client | 40e6891c8240e6b2acd5df538e622e9f15de43d6 | [
"Apache-2.0"
] | 14 | 2017-05-25T10:49:13.000Z | 2021-12-27T16:39:20.000Z | """SDK main client test module."""
# pylint: disable=no-self-use,protected-access
import json
import os
from splitio.client.client import Client, _LOGGER as _logger, CONTROL
from splitio.client.factory import SplitFactory
from splitio.engine.evaluator import Evaluator
from splitio.models.impressions import Impression, Label
from splitio.models.events import Event, EventWrapper
from splitio.storage import EventStorage, ImpressionStorage, SegmentStorage, SplitStorage, \
TelemetryStorage
from splitio.storage.inmemmory import InMemorySplitStorage, InMemorySegmentStorage, \
InMemoryImpressionStorage, InMemoryTelemetryStorage, InMemoryEventStorage
from splitio.models import splits, segments
from splitio.engine.impressions import Manager as ImpressionManager
# Recorder
from splitio.recorder.recorder import StandardRecorder
class ClientTests(object): # pylint: disable=too-few-public-methods
"""Split client test cases."""
def test_get_treatment(self, mocker):
"""Test get_treatment execution paths."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
destroyed_property = mocker.PropertyMock()
destroyed_property.return_value = False
factory = mocker.Mock(spec=SplitFactory)
factory._get_storage.side_effect = _get_storage_mock
factory._waiting_fork.return_value = False
type(factory).destroyed = destroyed_property
mocker.patch('splitio.client.client.utctime_ms', new=lambda: 1000)
mocker.patch('splitio.client.client.get_latency_bucket_index', new=lambda x: 5)
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
client._evaluator = mocker.Mock(spec=Evaluator)
client._evaluator.evaluate_feature.return_value = {
'treatment': 'on',
'configurations': None,
'impression': {
'label': 'some_label',
'change_number': 123
},
}
_logger = mocker.Mock()
mocker.patch('splitio.client.client._LOGGER', new=_logger)
assert client.get_treatment('some_key', 'some_feature') == 'on'
assert mocker.call(
[(Impression('some_key', 'some_feature', 'on', 'some_label', 123, None, 1000), None)]
) in impmanager.process_impressions.mock_calls
assert mocker.call('sdk.getTreatment', 5) in telemetry_storage.inc_latency.mock_calls
assert _logger.mock_calls == []
# Test with client not ready
ready_property = mocker.PropertyMock()
ready_property.return_value = False
type(factory).ready = ready_property
impmanager.process_impressions.reset_mock()
assert client.get_treatment('some_key', 'some_feature', {'some_attribute': 1}) == 'control'
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', Label.NOT_READY, mocker.ANY, mocker.ANY, mocker.ANY), {'some_attribute': 1})]
) in impmanager.process_impressions.mock_calls
# Test with exception:
ready_property.return_value = True
split_storage.get_change_number.return_value = -1
def _raise(*_):
raise Exception('something')
client._evaluator.evaluate_feature.side_effect = _raise
assert client.get_treatment('some_key', 'some_feature') == 'control'
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', 'exception', -1, None, 1000), None)]
) in impmanager.process_impressions.mock_calls
assert len(telemetry_storage.inc_latency.mock_calls) == 3
def test_get_treatment_with_config(self, mocker):
"""Test get_treatment execution paths."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
destroyed_property = mocker.PropertyMock()
destroyed_property.return_value = False
factory = mocker.Mock(spec=SplitFactory)
factory._get_storage.side_effect = _get_storage_mock
factory._waiting_fork.return_value = False
type(factory).destroyed = destroyed_property
mocker.patch('splitio.client.client.utctime_ms', new=lambda: 1000)
mocker.patch('splitio.client.client.get_latency_bucket_index', new=lambda x: 5)
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
client._evaluator = mocker.Mock(spec=Evaluator)
client._evaluator.evaluate_feature.return_value = {
'treatment': 'on',
'configurations': '{"some_config": True}',
'impression': {
'label': 'some_label',
'change_number': 123
}
}
_logger = mocker.Mock()
mocker.patch('splitio.client.client._LOGGER', new=_logger)
client._send_impression_to_listener = mocker.Mock()
assert client.get_treatment_with_config(
'some_key',
'some_feature'
) == ('on', '{"some_config": True}')
assert mocker.call(
[(Impression('some_key', 'some_feature', 'on', 'some_label', 123, None, 1000), None)]
) in impmanager.process_impressions.mock_calls
assert mocker.call('sdk.getTreatmentWithConfig', 5) in telemetry_storage.inc_latency.mock_calls
assert _logger.mock_calls == []
# Test with client not ready
ready_property = mocker.PropertyMock()
ready_property.return_value = False
type(factory).ready = ready_property
impmanager.process_impressions.reset_mock()
assert client.get_treatment_with_config('some_key', 'some_feature', {'some_attribute': 1}) == ('control', None)
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', Label.NOT_READY, mocker.ANY, mocker.ANY, mocker.ANY),
{'some_attribute': 1})]
) in impmanager.process_impressions.mock_calls
# Test with exception:
ready_property.return_value = True
split_storage.get_change_number.return_value = -1
def _raise(*_):
raise Exception('something')
client._evaluator.evaluate_feature.side_effect = _raise
assert client.get_treatment_with_config('some_key', 'some_feature') == ('control', None)
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', 'exception', -1, None, 1000), None)]
) in impmanager.process_impressions.mock_calls
assert len(telemetry_storage.inc_latency.mock_calls) == 3
def test_get_treatments(self, mocker):
"""Test get_treatment execution paths."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
destroyed_property = mocker.PropertyMock()
destroyed_property.return_value = False
factory = mocker.Mock(spec=SplitFactory)
factory._get_storage.side_effect = _get_storage_mock
factory._waiting_fork.return_value = False
type(factory).destroyed = destroyed_property
mocker.patch('splitio.client.client.utctime_ms', new=lambda: 1000)
mocker.patch('splitio.client.client.get_latency_bucket_index', new=lambda x: 5)
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
client._evaluator = mocker.Mock(spec=Evaluator)
evaluation = {
'treatment': 'on',
'configurations': '{"color": "red"}',
'impression': {
'label': 'some_label',
'change_number': 123
}
}
client._evaluator.evaluate_features.return_value = {
'f1': evaluation,
'f2': evaluation
}
_logger = mocker.Mock()
mocker.patch('splitio.client.client._LOGGER', new=_logger)
client._send_impression_to_listener = mocker.Mock()
assert client.get_treatments('key', ['f1', 'f2']) == {'f1': 'on', 'f2': 'on'}
impressions_called = impmanager.process_impressions.mock_calls[0][1][0]
assert (Impression('key', 'f1', 'on', 'some_label', 123, None, 1000), None) in impressions_called
assert (Impression('key', 'f2', 'on', 'some_label', 123, None, 1000), None) in impressions_called
assert mocker.call('sdk.getTreatments', 5) in telemetry_storage.inc_latency.mock_calls
assert _logger.mock_calls == []
# Test with client not ready
ready_property = mocker.PropertyMock()
ready_property.return_value = False
type(factory).ready = ready_property
impmanager.process_impressions.reset_mock()
assert client.get_treatments('some_key', ['some_feature'], {'some_attribute': 1}) == {'some_feature': 'control'}
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', Label.NOT_READY, mocker.ANY, mocker.ANY, mocker.ANY), {'some_attribute': 1})]
) in impmanager.process_impressions.mock_calls
# Test with exception:
ready_property.return_value = True
split_storage.get_change_number.return_value = -1
def _raise(*_):
raise Exception('something')
client._evaluator.evaluate_features.side_effect = _raise
assert client.get_treatments('key', ['f1', 'f2']) == {'f1': 'control', 'f2': 'control'}
assert len(telemetry_storage.inc_latency.mock_calls) == 2
def test_get_treatments_with_config(self, mocker):
"""Test get_treatment execution paths."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
destroyed_property = mocker.PropertyMock()
destroyed_property.return_value = False
factory = mocker.Mock(spec=SplitFactory)
factory._get_storage.side_effect = _get_storage_mock
factory._waiting_fork.return_value = False
type(factory).destroyed = destroyed_property
mocker.patch('splitio.client.client.utctime_ms', new=lambda: 1000)
mocker.patch('splitio.client.client.get_latency_bucket_index', new=lambda x: 5)
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
client._evaluator = mocker.Mock(spec=Evaluator)
evaluation = {
'treatment': 'on',
'configurations': '{"color": "red"}',
'impression': {
'label': 'some_label',
'change_number': 123
}
}
client._evaluator.evaluate_features.return_value = {
'f1': evaluation,
'f2': evaluation
}
_logger = mocker.Mock()
mocker.patch('splitio.client.client._LOGGER', new=_logger)
assert client.get_treatments_with_config('key', ['f1', 'f2']) == {
'f1': ('on', '{"color": "red"}'),
'f2': ('on', '{"color": "red"}')
}
impressions_called = impmanager.process_impressions.mock_calls[0][1][0]
assert (Impression('key', 'f1', 'on', 'some_label', 123, None, 1000), None) in impressions_called
assert (Impression('key', 'f2', 'on', 'some_label', 123, None, 1000), None) in impressions_called
assert mocker.call('sdk.getTreatmentsWithConfig', 5) in telemetry_storage.inc_latency.mock_calls
assert _logger.mock_calls == []
# Test with client not ready
ready_property = mocker.PropertyMock()
ready_property.return_value = False
type(factory).ready = ready_property
impmanager.process_impressions.reset_mock()
assert client.get_treatments_with_config('some_key', ['some_feature'], {'some_attribute': 1}) == {'some_feature': ('control', None)}
assert mocker.call(
[(Impression('some_key', 'some_feature', 'control', Label.NOT_READY, mocker.ANY, mocker.ANY, mocker.ANY), {'some_attribute': 1})]
) in impmanager.process_impressions.mock_calls
# Test with exception:
ready_property.return_value = True
split_storage.get_change_number.return_value = -1
def _raise(*_):
raise Exception('something')
client._evaluator.evaluate_features.side_effect = _raise
assert client.get_treatments_with_config('key', ['f1', 'f2']) == {
'f1': ('control', None),
'f2': ('control', None)
}
assert len(telemetry_storage.inc_latency.mock_calls) == 2
def test_destroy(self, mocker):
"""Test that destroy/destroyed calls are forwarded to the factory."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
factory = mocker.Mock(spec=SplitFactory)
destroyed_mock = mocker.PropertyMock()
type(factory).destroyed = destroyed_mock
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
client.destroy()
assert factory.destroy.mock_calls == [mocker.call()]
assert client.destroyed is not None
assert destroyed_mock.mock_calls == [mocker.call()]
def test_track(self, mocker):
"""Test that destroy/destroyed calls are forwarded to the factory."""
split_storage = mocker.Mock(spec=SplitStorage)
segment_storage = mocker.Mock(spec=SegmentStorage)
impression_storage = mocker.Mock(spec=ImpressionStorage)
event_storage = mocker.Mock(spec=EventStorage)
event_storage.put.return_value = True
telemetry_storage = mocker.Mock(spec=TelemetryStorage)
def _get_storage_mock(name):
return {
'splits': split_storage,
'segments': segment_storage,
'impressions': impression_storage,
'events': event_storage,
'telemetry': telemetry_storage
}[name]
factory = mocker.Mock(spec=SplitFactory)
factory._get_storage = _get_storage_mock
destroyed_mock = mocker.PropertyMock()
destroyed_mock.return_value = False
factory._waiting_fork.return_value = False
type(factory).destroyed = destroyed_mock
factory._apikey = 'test'
mocker.patch('splitio.client.client.utctime_ms', new=lambda: 1000)
impmanager = mocker.Mock(spec=ImpressionManager)
recorder = StandardRecorder(impmanager, telemetry_storage, event_storage,
impression_storage)
client = Client(factory, recorder, True)
assert client.track('key', 'user', 'purchase', 12) is True
assert mocker.call([
EventWrapper(
event=Event('key', 'user', 'purchase', 12, 1000, None),
size=1024
)
]) in event_storage.put.mock_calls
def test_evaluations_before_running_post_fork(self, mocker):
destroyed_property = mocker.PropertyMock()
destroyed_property.return_value = False
factory = mocker.Mock(spec=SplitFactory)
factory._waiting_fork.return_value = True
type(factory).destroyed = destroyed_property
expected_msg = [
mocker.call('Client is not ready - no calls possible')
]
client = Client(factory, mocker.Mock())
_logger = mocker.Mock()
mocker.patch('splitio.client.client._LOGGER', new=_logger)
assert client.get_treatment('some_key', 'some_feature') == CONTROL
assert _logger.error.mock_calls == expected_msg
_logger.reset_mock()
assert client.get_treatment_with_config('some_key', 'some_feature') == (CONTROL, None)
assert _logger.error.mock_calls == expected_msg
_logger.reset_mock()
assert client.track("some_key", "traffic_type", "event_type", None) is False
assert _logger.error.mock_calls == expected_msg
_logger.reset_mock()
assert client.get_treatments(None, ['some_feature']) == {'some_feature': CONTROL}
assert _logger.error.mock_calls == expected_msg
_logger.reset_mock()
assert client.get_treatments_with_config('some_key', ['some_feature']) == {'some_feature': (CONTROL, None)}
assert _logger.error.mock_calls == expected_msg
_logger.reset_mock()
| 44.351039 | 141 | 0.644085 | 1,982 | 19,204 | 5.982341 | 0.081736 | 0.046386 | 0.055495 | 0.053133 | 0.864047 | 0.850805 | 0.850299 | 0.850299 | 0.845323 | 0.839082 | 0 | 0.010414 | 0.249948 | 19,204 | 432 | 142 | 44.453704 | 0.81276 | 0.031816 | 0 | 0.760563 | 0 | 0 | 0.109697 | 0.022975 | 0 | 0 | 0 | 0 | 0.143662 | 1 | 0.047887 | false | 0 | 0.033803 | 0.016901 | 0.101408 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b45a7b7babf48e447bdb7dce25bb5ee7f365f5d8 | 19,177 | py | Python | src/rca/calculate_dbz95.py | josephhardinee/rca | b50ce4557b366553495a7a958d8dc30985a8fbd6 | [
"MIT"
] | 4 | 2020-03-03T14:32:46.000Z | 2021-06-09T08:42:56.000Z | src/rca/calculate_dbz95.py | josephhardinee/rca | b50ce4557b366553495a7a958d8dc30985a8fbd6 | [
"MIT"
] | 1 | 2021-02-17T17:14:07.000Z | 2021-02-17T17:14:07.000Z | src/rca/calculate_dbz95.py | josephhardinee/rca | b50ce4557b366553495a7a958d8dc30985a8fbd6 | [
"MIT"
] | 1 | 2020-03-03T14:32:48.000Z | 2020-03-03T14:32:48.000Z | import numpy as np
from .aux.create_masks import create_az_mask_ppi, create_az_mask_rhi
def calculate_dbz95_ppi(
variable_dictionary,
polarization,
range_limit,
radar_band,
clutter_mask_h,
clutter_mask_v=None,
):
"""
calculate_dbz95_ppi calculates the 95th percentile reflectivity for a given radar PPI file
using the input PPI clutter map masks (H and/or V). Returns the date and time of the file,
95th percentile reflectivity value for Zh and/or Zv, and dictionaries of statistics,
including number of points, histogram/PDF, bins, CDF.
Parameters
----------
variable_dictionary: dict
dictionary with values, strings, and arrays of relevant radar data
i.e. 'reflectivity_h', 'reflectivity_v', 'azimuth', 'range', 'date_time'
polarization: str
specifies the polarization(s) for which to calculate the 95th percentile clutter area reflectivity
'dual': calculate for both H and V
'horizontal': calculate only for H
range_limit: int
value of desired radar gate range limit
radar_band: str
one or two letter code for radar band
clutter_mask_h: MaskedArray
masked array denotes which elements are considered clutter
used to extract reflectivity values from overlapping radar gates
for H polarization
clutter_mask_v: MaskedArray
masked array denotes which elements are considered clutter
used to extract reflectivity values from overlapping radar gates
for V polarization
default is None, array must be provided if calculating for V polarization
Returns
-------
date_time: str
date and time of the file
dbz95_h: float
value of the 95th percentile clutter area reflectivity for H polarization
stats_h: dict
contains statistics from the PDF and CDF of the clutter area reflectivity in H polarization
num_pts_h: number of points
hn: number of histogram bins
hbins: bin edges of histogram
hp: CDF
dbz95_h: 95th percentile reflectivity
dbz95_v: float (or array?)
value of the 95th percentile clutter area reflectivity for V polarization
stats_v: dict
contains statistics from the PDF and CDF of the clutter area reflectivity in V polarization
num_pts_v: number of points
vn: number of histogram bins
vbins: bin edges of histogram
vp: CDF
dbz95_v: 95th percentile reflectivity
"""
date_time = variable_dictionary["date_time"]
r = variable_dictionary["range"]
theta = variable_dictionary["azimuth"]
zh = variable_dictionary["reflectivity_h"]
date_int = int(date_time[0:4]+date_time[5:7]+date_time[8:10])
###########################
# Special case
# KASACR calibration constant during CACTI
# zh(corrected) = zh(in_file) + zh_offset
# BEFORE 2019-03-18 16:42:33 UTC
# dirt on waveguide, reflectivity low
# zh_offset = 10.6 + difference of RCA going backward in time
# AFTER 2019-03-18 16:42:33 UTC
# waveguide cleaned
# zh_offset = 10.6
#zh_offset = 10.6
#if radar_band == 'ka':
# zh = zh + zh_offset
zh_offset = 0
###########################
range_shape = range_limit / 1000
theta_list = np.arange(360)
r_list = np.arange(range_shape)
# H POLARIZATION
zh_from_mask = []
if radar_band == "ka":
zh = zh + zh_offset
for idx_az, az in enumerate(theta_list):
az_mask = create_az_mask_ppi(az, theta, radar_band)
zh_rays = zh[az_mask, :]
zh_rays = np.ma.getdata(zh_rays)
for idx_ra, ra in enumerate(r_list):
if clutter_mask_h[idx_az, idx_ra]:
if ra == range_shape:
continue
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(r - (r_list[idx_ra + 1] * 1000.0) >= 0.0)[0][0]
except IndexError:
rstop = -1
zh_from_mask.append(zh_rays[:, rstart:rstop])
all_zh = []
for i in range(0, len(zh_from_mask)):
if len(zh_from_mask[i]) != 0:
for j in range(0, len(zh_from_mask[i])):
for k in range(0, len(zh_from_mask[i][j])):
all_zh.append(zh_from_mask[i][j][k])
num_pts_h = len(all_zh)
hn, hbins = np.histogram(all_zh, bins=525, range=(-40.0, 65.0))
# Calculate CDF of clutter area reflectivity
hcdf = np.cumsum(hn)
hp = hcdf / hcdf[-1] * 100
x = np.arange(525) * (1 / 5) - 40
# Find the value of reflectivity at the 95th percentile of CDF
idx95 = (np.abs(hp - 95.0)).argmin()
dbz95_h = x[idx95]
stats_h = {
"num_points": num_pts_h,
"histo_n": hn,
"histo_bins": hbins,
"cdf": hp,
"reflectivity_95": dbz95_h,
}
if polarization == "horizontal":
return date_time, stats_h
elif polarization == "dual":
zv = variable_dictionary["reflectivity_v"]
# V POLARIZATION
zv_from_mask = []
for idx_az, az in enumerate(theta_list):
az_mask = create_az_mask_ppi(az, theta, radar_band)
zv_rays = zv[az_mask, :]
zv_rays = np.ma.getdata(zv_rays)
for idx_ra, ra in enumerate(r_list):
if clutter_mask_v[idx_az, idx_ra]:
if ra == range_shape:
continue
else:
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(r - (r_list[idx_ra + 1] * 1000.0) >= 0.0)[
0
][0]
except IndexError:
rstop = -1
zv_from_mask.append(zv_rays[:, rstart:rstop])
all_zv = []
for i in range(0, len(zv_from_mask)):
if len(zv_from_mask[i]) != 0:
for j in range(0, len(zv_from_mask[i])):
for k in range(0, len(zv_from_mask[i][j])):
all_zv.append(zv_from_mask[i][j][k])
num_pts_v = len(all_zv)
vn, vbins = np.histogram(all_zv, bins=525, range=(-40.0, 65.0))
# Calculate CDF of clutter area reflectivity
vcdf = np.cumsum(vn)
vp = vcdf / vcdf[-1] * 100
x = np.arange(525) * (1 / 5) - 40
# Find the value of reflectivity at the 95th percentile of CDF
idx95 = (np.abs(vp - 95.0)).argmin()
dbz95_v = x[idx95]
stats_v = {
"num_points": num_pts_v,
"histo_n": vn,
"histo_bins": vbins,
"cdf": vp,
"reflectivity_95": dbz95_v,
}
return date_time, stats_h, stats_v
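The histogram/CDF percentile lookup used above (525 bins spanning -40 to 65 dBZ, i.e. 0.2 dB per bin, reading reflectivity off at the left bin edge nearest the 95th percentile of the CDF) can be sketched in isolation. The function name and sample data below are illustrative, not part of the module:

```python
import numpy as np

def dbz_percentile(values, pct=95.0, bins=525, lo=-40.0, hi=65.0):
    """Reflectivity at the left bin edge whose CDF value is closest to pct."""
    counts, _ = np.histogram(values, bins=bins, range=(lo, hi))
    cdf = np.cumsum(counts)
    p = cdf / cdf[-1] * 100.0                          # CDF in percent
    edges = np.arange(bins) * ((hi - lo) / bins) + lo  # left bin edges
    return edges[np.abs(p - pct).argmin()]

samples = np.linspace(-39.9, 64.9, 525)  # one sample at each bin center
print(dbz_percentile(samples))
```

With a uniform sample (one point per bin), the returned value sits about 95% of the way up the reflectivity range, as expected.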
def calculate_dbz95_rhi(
variable_dictionary,
polarization,
range_limit,
radar_band,
clutter_mask_h,
clutter_mask_v=None,
):
"""
calculate_dbz95_rhi calculates the 95th percentile reflectivity for a given radar HSRHI file
using the input HSRHI clutter map masks (H and/or V). Returns the date and time of the file
and dictionaries of statistics for Zh and/or Zv, including number of points,
histogram/PDF, bins, CDF, and the 95th percentile reflectivity value.
Parameters
----------
variable_dictionary: dict
dictionary with values, strings, and arrays of relevant radar data
i.e. 'reflectivity_h', 'reflectivity_v', 'azimuth', 'range', 'date_time', 'elevation'
polarization: str
specifies the polarization(s) to calculate for
'dual': calculate for both H and V
'horizontal': calculate only for H
range_limit: int
value of desired radar gate range limit
radar_band: str
one or two letter code for radar band
clutter_mask_h: MaskedArray
masked array denoting which elements are considered clutter,
used to extract reflectivity values from overlapping radar gates
for H polarization
clutter_mask_v: MaskedArray
masked array denoting which elements are considered clutter,
used to extract reflectivity values from overlapping radar gates
for V polarization
default is None; an array must be provided when calculating for V polarization
Returns
-------
date_time: str
date and time of the file
stats_h: dict
statistics from the PDF and CDF of the clutter area reflectivity in H polarization:
num_points: number of points
histo_n: histogram counts
histo_bins: bin edges of the histogram
cdf: CDF (percent)
reflectivity_95: 95th percentile reflectivity
stats_v: dict (only returned when polarization is 'dual')
statistics from the PDF and CDF of the clutter area reflectivity in V polarization,
with the same keys as stats_h
"""
date_time = variable_dictionary["date_time"]
r = variable_dictionary["range"]
elev = variable_dictionary["elevation"]
theta = variable_dictionary["azimuth"]
zh = variable_dictionary["reflectivity_h"]
#radar_constant = variable_dictionary["radar_constant_h"]
date_int = int(date_time[0:4]+date_time[5:7]+date_time[8:10])
###########################
# Special case
# KASACR calibration constant during CACTI
# zh(corrected) = zh(in_file) + zh_offset
# BEFORE 2019-03-18 16:42:33 UTC
# dirt on waveguide, reflectivity low
# zh_offset = 10.6 + difference of RCA going backward in time
# AFTER 2019-03-18 16:42:33 UTC
# waveguide cleaned
# zh_offset = 10.6
#zh_offset = 10.6
#if radar_band == 'ka':
# zh = zh + zh_offset
###########################################
# Modify for mrhis and hsrhis after March 7
# CSAPR2
#if radar_band == 'c' and date_int > 20190306:
# theta_list = [90]
#else:
# theta_list = [0, 30, 60, 90, 120, 150]
###########################################
# Modify XSACR Zh to take care
# of radar constant difference
# Correct radar constant: -20.X
#correct_radar_constant = -20.14638
#if radar_band == 'x':
# zh = zh + (correct_radar_constant - radar_constant)
##############################
###########################
#######################################
# Attenuation filtering
# If a ray surpasses a certain threshold of integrated attenuation
# (based on an Ah-Z relationship)
# the whole file is ignored and move on to next file
# i.e. dump out NaN for dbz95
# FOR TESTING, OUTPUT TYPICAL DBZ95, JUST FLAG AS *WOULD BE TRASHED*
# Ah = az^b, where z is in mm6m-3
# Z = 10log10(z) mm6m-3 => dBZ
# z = 10^(Z/10) dBZ => mm6m-3
# ah = a * (10**(zh/10)) **b
#a = 0.000631967738
#b = 0.971513669
a = 0.00115481 #new from JCH Nov 14 2019
b = 0.95361079
gate_width = 0.025 #km
iah_thresh = 0.1
########################################
range_shape = range_limit / 1000
elev_list = [1, 2, 3, 4, 5, 175, 176, 177, 178, 179]
theta_list = [0, 30, 60, 90, 120, 150]
r_list = np.arange(range_shape) + 1
# H POLARIZATION
zh_from_mask = []
if radar_band == 'ka':
iah = []
for idx_az, az in enumerate(theta_list):
if az == 0:
continue
az_mask = create_az_mask_rhi(az, theta, radar_band)
for idx_el, el in enumerate(elev_list):
el_mask = np.abs(elev - el) < 0.5
zh_rays = zh[np.logical_and(az_mask, el_mask), :]
for idx_ra, ra in enumerate(r_list):
if clutter_mask_h[idx_az, idx_el, idx_ra]:
if ra == range_shape:
continue
else:
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(
r - (r_list[idx_ra + 1] * 1000.0) >= 0.0
)[0][0]
except IndexError:
rstop = -1
zh_from_mask.append(zh_rays[:, rstart:rstop])
# Try the CSAPR2 exception (only look at 90-270 degree)
elif radar_band == 'c' and date_int > 20190306:
#print('band = C and date is after March 6')
theta_list = [270]
# Use the modified clutter map, shape(1,10,40)
for idx_az, az in enumerate(theta_list):
az_mask = create_az_mask_rhi(az, theta, radar_band)
for idx_el, el in enumerate(elev_list):
el_mask = np.abs(elev - el) < 0.5
zh_rays = zh[np.logical_and(az_mask, el_mask), :]
zh_rays = np.ma.getdata(zh_rays)
for idx_ra, ra in enumerate(r_list):
if clutter_mask_h[idx_az, idx_el, idx_ra]:
if ra == range_shape:
continue
else:
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(
r - (r_list[idx_ra + 1] * 1000.0) >= 0.0
)[0][0]
except IndexError:
rstop = -1
zh_from_mask.append(zh_rays[:, rstart:rstop])
# Now for any band that is not Ka (C, X)
else:
#print('band = ', radar_band, 'date_int', date_int)
for idx_az, az in enumerate(theta_list):
az_mask = create_az_mask_rhi(az, theta, radar_band)
for idx_el, el in enumerate(elev_list):
el_mask = np.abs(elev - el) < 0.5
zh_rays = zh[np.logical_and(az_mask, el_mask), :]
zh_rays = np.ma.getdata(zh_rays)
for idx_ra, ra in enumerate(r_list):
if clutter_mask_h[idx_az, idx_el, idx_ra]:
if ra == range_shape:
continue
else:
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(
r - (r_list[idx_ra + 1] * 1000.0) >= 0.0
)[0][0]
except IndexError:
rstop = -1
zh_from_mask.append(zh_rays[:, rstart:rstop])
all_zh = []
for i in range(0, len(zh_from_mask)):
if len(zh_from_mask[i]) != 0:
for j in range(0, len(zh_from_mask[i])):
for k in range(0, len(zh_from_mask[i][j])):
all_zh.append(zh_from_mask[i][j][k])
num_pts_h = len(all_zh)
hn, hbins = np.histogram(all_zh, bins=525, range=(-40.0, 65.0))
# Calculate CDF of clutter area reflectivity
hcdf = np.cumsum(hn)
hp = hcdf / hcdf[-1] * 100
x = np.arange(525) * (1 / 5) - 40
# Find the value of reflectivity at the 95th percentile of CDF
idx95 = (np.abs(hp - 95.0)).argmin()
dbz95_h = x[idx95]
# NOTE: the attenuation filter above is not yet implemented, so no
# pass_filter flag is available; build stats_h the same way for all bands
stats_h = {
"num_points": num_pts_h,
"histo_n": hn,
"histo_bins": hbins,
"cdf": hp,
"reflectivity_95": dbz95_h,
}
if polarization == "horizontal":
return date_time, stats_h
elif polarization == "dual":
zv = variable_dictionary["reflectivity_v"]
# V POLARIZATION
zv_from_mask = []
for idx_az, az in enumerate(theta_list):
az_mask = create_az_mask_rhi(az, theta, radar_band)
for idx_el, el in enumerate(elev_list):
el_mask = np.abs(elev - el) < 0.5
zv_rays = zv[np.logical_and(az_mask, el_mask), :]
zv_rays = np.ma.getdata(zv_rays)
for idx_ra, ra in enumerate(r_list):
if clutter_mask_v[idx_az, idx_el, idx_ra]:
if ra == range_shape:
continue
else:
rstart = np.where(r - (ra * 1000.0) >= 0.0)[0][0]
try:
rstop = np.where(
r - (r_list[idx_ra + 1] * 1000.0) >= 0.0
)[0][0]
except IndexError:
rstop = -1
zv_from_mask.append(zv_rays[:, rstart:rstop])
all_zv = []
for i in range(0, len(zv_from_mask)):
if len(zv_from_mask[i]) != 0:
for j in range(0, len(zv_from_mask[i])):
for k in range(0, len(zv_from_mask[i][j])):
all_zv.append(zv_from_mask[i][j][k])
num_pts_v = len(all_zv)
vn, vbins = np.histogram(all_zv, bins=525, range=(-40.0, 65.0))
# Calculate CDF of clutter area reflectivity
vcdf = np.cumsum(vn)
vp = vcdf / vcdf[-1] * 100
x = np.arange(525) * (1 / 5) - 40
# Find the value of reflectivity at the 95th percentile of CDF
idx95 = (np.abs(vp - 95.0)).argmin()
dbz95_v = x[idx95]
stats_v = {
"num_points": num_pts_v,
"histo_n": vn,
"histo_bins": vbins,
"cdf": vp,
"reflectivity_95": dbz95_v,
}
return date_time, stats_h, stats_v
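The attenuation filter outlined in the comments of `calculate_dbz95_rhi` (an Ah-Z power law integrated along each ray, flagging the file when the integrated attenuation exceeds the threshold) is never actually applied in the code above. A minimal sketch of what that check might look like, reusing the constants from the source; the function name and sample rays are illustrative assumptions:

```python
import numpy as np

A_COEF = 0.00115481    # a in Ah = a * z**b, z in mm^6 m^-3 (JCH, Nov 14 2019)
B_COEF = 0.95361079    # b
GATE_WIDTH_KM = 0.025  # range-gate spacing in km
IAH_THRESH = 0.1       # integrated attenuation threshold

def ray_passes_filter(zh_ray_dbz):
    """True if the ray's integrated attenuation stays below the threshold."""
    z_linear = 10.0 ** (np.asarray(zh_ray_dbz) / 10.0)  # dBZ -> mm^6 m^-3
    ah = A_COEF * z_linear ** B_COEF                    # specific attenuation per gate
    iah = np.sum(ah) * GATE_WIDTH_KM                    # integrate along the ray
    return iah < IAH_THRESH

print(ray_passes_filter(np.full(400, 5.0)))    # weak echo over a 10 km ray
print(ray_passes_filter(np.full(400, 45.0)))   # strong echo
```

As the comments suggest, a file failing this check would dump NaN for dbz95 (or, for testing, be flagged but still processed).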
| 37.824458 | 99 | 0.529749 | 2,484 | 19,177 | 3.907407 | 0.114332 | 0.011539 | 0.012982 | 0.011539 | 0.882753 | 0.877808 | 0.873068 | 0.865238 | 0.856378 | 0.82516 | 0 | 0.050842 | 0.365125 | 19,177 | 506 | 100 | 37.899209 | 0.746366 | 0.336966 | 0 | 0.876364 | 0 | 0 | 0.027936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007273 | false | 0.007273 | 0.007273 | 0 | 0.029091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b48c5329df44f1e700b68721dbede12352e2b86b | 4,927 | py | Python | data_processing/label_mapper.py | andykawabata/Hackathon-Runtime_Terror | 43436723d747c1b7139e0eb2545ec03b7de26f98 | [
"MIT"
] | null | null | null | data_processing/label_mapper.py | andykawabata/Hackathon-Runtime_Terror | 43436723d747c1b7139e0eb2545ec03b7de26f98 | [
"MIT"
] | null | null | null | data_processing/label_mapper.py | andykawabata/Hackathon-Runtime_Terror | 43436723d747c1b7139e0eb2545ec03b7de26f98 | [
"MIT"
] | 2 | 2020-12-19T15:24:23.000Z | 2020-12-23T23:09:30.000Z | import pandas as pd
class LabelMapper:
@staticmethod
def map_to_dictionary():
"""
Uses the 'Meter Names and Labels.xlsx' file to correlate file names and labels
:return: dictionary where the keys are the file names and the values are
the corresponding labels EX {filename1: label1, filename2: label2}
"""
name_labels = pd.read_excel('./data/Meter Names and Labels.xlsx')
filename_labels = {}
for index, row in name_labels.iterrows():
# build file name and add to dict
filename = []
file_prefix = row['Name'].replace("'", "").replace(" ", "")
file_prefix = file_prefix.split('-')[0]
if 'JacksonLibraryTower' in file_prefix:
file_prefix = 'JacksonLibraryTower'
filename.append(file_prefix)
filename.append('_results.csv')
filename = str.join('', filename)
filename = filename.replace(u'\xa0', u'')
# build label and append to dict
long_label = row['Label']
label = long_label.split(' (')[0]
if 'Kaplan Center for Wellness' in long_label:
label = LabelMapper.handle_kaplan(long_label)
if 'Jackson Library' in long_label:
label = LabelMapper.handle_jackson(long_label)
label = label.replace(u'\xa0', u'')
# add to dictionary
filename_labels[filename] = label
return filename_labels
@staticmethod
def map_to_dictionary_reverse():
"""
Uses the 'Meter Names and Labels.xlsx' file to correlate file names and labels
:return: dictionary where the keys are the labels and the values are
the corresponding file names EX {label1: filename1, label2: filename2}
"""
name_labels = pd.read_excel('./data/Meter Names and Labels.xlsx')
label_filenames = {}
for index, row in name_labels.iterrows():
# build file name and add to dict
filename = []
file_prefix = row['Name'].replace("'", "").replace(" ", "")
file_prefix = file_prefix.split('-')[0]
if 'JacksonLibraryTower' in file_prefix:
file_prefix = 'JacksonLibraryTower'
filename.append(file_prefix)
filename.append('_results.csv')
filename = str.join('', filename)
filename = filename.replace(u'\xa0', u'')
# build label and append to dict
long_label = row['Label']
label = long_label.split(' (')[0]
if 'Kaplan Center for Wellness' in long_label:
label = LabelMapper.handle_kaplan(long_label)
if 'Jackson Library' in long_label:
label = LabelMapper.handle_jackson(long_label)
label = label.replace(u'\xa0', u'')
# add to dictionary
label_filenames[label] = filename
return label_filenames
@staticmethod
def map_to_array():
"""
Uses the 'Meter Names and Labels.xlsx' file to correlate file names and labels
:return: array of dictionaries. each dictionary contains a filename
and corresponding label. The keys are 'filename' and 'label'
EX: [{filename: filename1, label: label1}, {filename: filename2, label: label2}]
"""
name_labels = pd.read_excel('./data/Meter Names and Labels.xlsx')
labels_filenames = []
for index, row in name_labels.iterrows():
# build file name and add to dict
pair = {}
filename = []
file_prefix = row['Name'].replace("'", "").replace(" ", "")
file_prefix = file_prefix.split('-')[0]
if 'JacksonLibraryTower' in file_prefix:
file_prefix = 'JacksonLibraryTower'
filename.append(file_prefix)
filename.append('_results.csv')
filename = str.join('', filename)
pair['filename'] = filename
# build label and append to dict
long_label = row['Label']
label = long_label.split(' (')[0]
if 'Kaplan Center for Wellness' in long_label:
label = LabelMapper.handle_kaplan(long_label)
if 'Jackson Library' in long_label:
label = LabelMapper.handle_jackson(long_label)
pair['label'] = label
pair['filename'] = pair['filename'].replace(u'\xa0', '')
# add filename/label pair to list
labels_filenames.append(pair)
return labels_filenames
@staticmethod
def handle_kaplan(long_label):
left = 'Kaplan Center'
right = long_label.split(')')[1]
return left + right
@staticmethod
def handle_jackson(long_label):
left = 'Jackson Library'
right = long_label.split(')')[1]
return left + right
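The filename construction repeated verbatim in all three mapper methods could be factored into one helper. A hedged sketch (the helper name and sample meter names are invented; the real class reads names from the Excel sheet):

```python
def build_filename(name):
    """Derive the results-CSV filename LabelMapper builds from a meter name."""
    # strip quotes and spaces, keep only the part before the first hyphen
    prefix = name.replace("'", "").replace(" ", "").split('-')[0]
    if 'JacksonLibraryTower' in prefix:
        prefix = 'JacksonLibraryTower'  # collapse tower variants
    return (prefix + '_results.csv').replace(u'\xa0', u'')

print(build_filename("Science Building - Electric"))
print(build_filename("Jackson Library Tower 2 - Meter"))
```

Each mapper method would then call this helper instead of duplicating the string handling.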
| 39.733871 | 88 | 0.58149 | 540 | 4,927 | 5.164815 | 0.144444 | 0.070993 | 0.045177 | 0.040875 | 0.801721 | 0.770527 | 0.770527 | 0.770527 | 0.745428 | 0.745428 | 0 | 0.007425 | 0.316623 | 4,927 | 123 | 89 | 40.056911 | 0.820909 | 0.199716 | 0 | 0.73494 | 0 | 0 | 0.128151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060241 | false | 0 | 0.012048 | 0 | 0.144578 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c31acec38245b052497930053e9e25252e85b203 | 270 | py | Python | mlclas/ensemble/__init__.py | markzy/multi-label-classification | e5b10351f2b9f1b3eba7f81b7edb7acad6f313d4 | [
"MIT"
] | 1 | 2016-05-17T03:36:35.000Z | 2016-05-17T03:36:35.000Z | mlclas/ensemble/__init__.py | markzy/multi-label-classification | e5b10351f2b9f1b3eba7f81b7edb7acad6f313d4 | [
"MIT"
] | null | null | null | mlclas/ensemble/__init__.py | markzy/multi-label-classification | e5b10351f2b9f1b3eba7f81b7edb7acad6f313d4 | [
"MIT"
] | null | null | null | from mlclas.ensemble.ensembles import BinaryRelevance, ClassifierChains, CalibratedLabelRanking, RandomKLabelsets, MLKNN
__all__ = ['BinaryRelevance',
'ClassifierChains',
'CalibratedLabelRanking',
'RandomKLabelsets',
'MLKNN'] | 38.571429 | 120 | 0.696296 | 16 | 270 | 11.5 | 0.6875 | 0.336957 | 0.576087 | 0.75 | 0.804348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.218519 | 270 | 7 | 121 | 38.571429 | 0.872038 | 0 | 0 | 0 | 0 | 0 | 0.273063 | 0.081181 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c3606a96c43a6bb14d46027d8f0a826c3441799d | 124 | py | Python | dist/Basilisk/simulation/clock_synch/__init__.py | ian-cooke/basilisk_mag | a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14 | [
"0BSD"
] | null | null | null | dist/Basilisk/simulation/clock_synch/__init__.py | ian-cooke/basilisk_mag | a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14 | [
"0BSD"
] | 1 | 2019-03-13T20:52:22.000Z | 2019-03-13T20:52:22.000Z | dist/Basilisk/simulation/clock_synch/__init__.py | ian-cooke/basilisk_mag | a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14 | [
"0BSD"
] | null | null | null | # This __init__.py file for the clock_synch package is automatically generated by the build system
from clock_synch import * | 62 | 98 | 0.830645 | 20 | 124 | 4.85 | 0.85 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 124 | 2 | 99 | 62 | 0.915094 | 0.774194 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6f0e323f54d5c8cfda1ef6fe208840ee07201f70 | 5,050 | py | Python | CIF.py | IB-NET-Internet-Banking-Network/Entire-Network | 8214f06bd7afb59b1b0792e68343e2cf599464a6 | [
"BSD-3-Clause"
] | 1 | 2021-05-22T03:58:34.000Z | 2021-05-22T03:58:34.000Z | CIF.py | IB-NET-Internet-Banking-Network/Entire-Network | 8214f06bd7afb59b1b0792e68343e2cf599464a6 | [
"BSD-3-Clause"
] | null | null | null | CIF.py | IB-NET-Internet-Banking-Network/Entire-Network | 8214f06bd7afb59b1b0792e68343e2cf599464a6 | [
"BSD-3-Clause"
] | null | null | null | # make document
# Vishnu...thank you for electronics
SEPARATOR = "------------------------------------------------------------------\n"
HEADER = "------------------Customer Information File-----------------------\n"
FIELDS = ["Account Number", "IFSC code", "Branch code", "Account Type",
"Customer Name", "D.O.B", "Registered Phone Number", "Email id",
"Marital Status", "Current KYC Status", "Address"]
CUSTOMERS = {
"98765432011.txt": ["00000000011", "RBIS0PFMS01", "PFMS01", "Saving type",
"MANAS KUMAR MISHRA", "23/JAN/2000", "8xxxxxxx61", "----@----",
"Single", "Student in IIITDM kancheepuram", "Khajuri khas, New Delhi"],
"98765432026.txt": ["00000000026", "RBIS0PFMS01", "PFMS01", "Saving type",
"KARTHIKA RAJESH", "13/JUN/2000", "9xxxxxxx31", "----@----",
"Single", "Student in IIITDM kancheepuram", "Bengaluru"],
"98765432006.txt": ["00000000006", "RBIS0PFMS01", "PFMS01", "Saving type",
"GANESH T S", "06/MAY/2000", "8xxxxxxx05", "----@----",
"Single", "Student in IIITDM kancheepuram", "21 Ashoka Road, New Delhi"],
}
for filename, values in CUSTOMERS.items():
with open(filename, "w") as put_data:
put_data.write(SEPARATOR)
put_data.write(HEADER)
for field, value in zip(FIELDS, values):
put_data.write("==> " + field + " : " + value + "\n")
put_data.write(SEPARATOR)
put_data.write(SEPARATOR)
48b26226c7a4d8fc3b01f96e437f4ac4169185a9 | 2,044 | py | Python | test/functions/decl6.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 1,482 | 2015-10-16T21:59:32.000Z | 2022-03-30T11:44:40.000Z | test/functions/decl6.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 226 | 2015-10-15T15:53:44.000Z | 2022-03-25T03:08:27.000Z | test/functions/decl6.py | kylebarron/MagicPython | da6fa0793e2c85d3bf7709ff1d4f65ccf468db11 | [
"MIT"
] | 129 | 2015-10-20T02:41:49.000Z | 2022-03-22T01:44:36.000Z | def True(): pass
def None(): pass
def False(): pass
def : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
True : keyword.illegal.name.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
: source.python
pass : keyword.control.flow.python, source.python
def : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
None : keyword.illegal.name.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
: source.python
pass : keyword.control.flow.python, source.python
def : meta.function.python, source.python, storage.type.function.python
: meta.function.python, source.python
False : keyword.illegal.name.python, meta.function.python, source.python
( : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.begin.python, source.python
) : meta.function.parameters.python, meta.function.python, punctuation.definition.parameters.end.python, source.python
: : meta.function.python, punctuation.section.function.begin.python, source.python
: source.python
pass : keyword.control.flow.python, source.python
| 65.935484 | 132 | 0.696673 | 225 | 2,044 | 6.328889 | 0.097778 | 0.202247 | 0.303371 | 0.252809 | 0.966994 | 0.966994 | 0.966994 | 0.966994 | 0.966994 | 0.966994 | 0 | 0 | 0.193249 | 2,044 | 30 | 133 | 68.133333 | 0.863554 | 0 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.222222 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 12 |
48c1fed3af8cb71a15b8c837f5b000bfb7e8bd5f | 3,306 | py | Python | python/tests/test_s16.py | jameswilddev/Fau | 8d42541740bd838a499f4ca3625701665b1bbcd0 | [
"MIT"
] | null | null | null | python/tests/test_s16.py | jameswilddev/Fau | 8d42541740bd838a499f4ca3625701665b1bbcd0 | [
"MIT"
] | 1 | 2020-06-08T23:03:28.000Z | 2021-01-21T13:16:53.000Z | python/tests/test_s16.py | jameswilddev/Fau | 8d42541740bd838a499f4ca3625701665b1bbcd0 | [
"MIT"
] | null | null | null | import unittest
from fau import S16
class TestS16(unittest.TestCase):
def test_length_negative_throws_error(self):
with self.assertRaises(OverflowError):
S16(-1)
def test_length_zero_returns_correct_length(self):
s16 = S16(0)
self.assertEqual(0, len(s16))
def test_length_correct(self):
s16 = S16(7)
self.assertEqual(7, len(s16))
def test_initialized_as_zeroes(self):
s16 = S16(7)
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(0, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
def test_get_negative_index_throws_error(self):
s16 = S16(7)
with self.assertRaises(IndexError):
s16[-1]
def test_get_index_out_of_range_throws_error(self):
s16 = S16(7)
with self.assertRaises(IndexError):
s16[7]
def test_set_negative_index_throws_error(self):
s16 = S16(7)
with self.assertRaises(IndexError):
s16[-1] = 5983
def test_set_index_out_of_range_throws_error(self):
s16 = S16(7)
with self.assertRaises(IndexError):
s16[7] = 5983
def test_set_out_of_range_negative_throws_error(self):
s16 = S16(7)
with self.assertRaises(OverflowError):
s16[3] = -32769
def test_set_out_of_range_positive_throws_error(self):
s16 = S16(7)
with self.assertRaises(OverflowError):
s16[3] = 32768
def test_set_minimum_value(self):
s16 = S16(7)
s16[3] = -32768
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(-32768, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
def test_set_negative(self):
s16 = S16(7)
s16[3] = -5983
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(-5983, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
def test_set_zero(self):
s16 = S16(7)
s16[3] = 5983
s16[3] = 0
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(0, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
def test_set_positive(self):
s16 = S16(7)
s16[3] = 5983
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(5983, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
def test_set_maximum_value(self):
s16 = S16(7)
s16[3] = 32767
self.assertEqual(0, s16[0])
self.assertEqual(0, s16[1])
self.assertEqual(0, s16[2])
self.assertEqual(32767, s16[3])
self.assertEqual(0, s16[4])
self.assertEqual(0, s16[5])
self.assertEqual(0, s16[6])
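The `S16` class under test is imported from `fau` and not shown here. One minimal implementation that would satisfy these tests is sketched below (an assumption, not the library's actual code), backed by Python's `array` module with type code `'h'` for signed 16-bit integers:

```python
import array

class S16:
    """Fixed-length signed 16-bit buffer with strict index and range checks."""

    def __init__(self, length):
        if length < 0:
            raise OverflowError("length must be non-negative")
        self._data = array.array('h', [0] * length)  # zero-initialized

    def __len__(self):
        return len(self._data)

    def __getitem__(self, index):
        if not 0 <= index < len(self._data):
            raise IndexError(index)  # rejects negative and out-of-range indices
        return self._data[index]

    def __setitem__(self, index, value):
        if not 0 <= index < len(self._data):
            raise IndexError(index)
        if not -32768 <= value <= 32767:
            raise OverflowError(value)  # outside signed 16-bit range
        self._data[index] = value
```

The explicit range check mirrors what the tests expect; `array.array('h')` would also raise `OverflowError` on its own for out-of-range assignments.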
| 25.045455 | 58 | 0.579855 | 445 | 3,306 | 4.164045 | 0.110112 | 0.356179 | 0.336751 | 0.389638 | 0.820291 | 0.78953 | 0.747976 | 0.716136 | 0.716136 | 0.716136 | 0 | 0.145904 | 0.290986 | 3,306 | 131 | 59 | 25.236641 | 0.644625 | 0 | 0 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.53125 | 1 | 0.15625 | false | 0 | 0.020833 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
48d846b932ef403b3eb877635bff8db3db0d88c1 | 10,322 | py | Python | training_v1_backup/training/SAC/run_sac.py | prasoonpatidar/multiagentRL-resource-sharing | MIT
'''
Main wrapper function to train and evaluate SAC algorithm
'''
import numpy as np
import time
# custom libraries
from training.SAC.run_helper import buyerPenaltiesCalculator, buyerUtilitiesCalculator, action2y, ydiff2action
from training.SAC.run_helper import logger_handle, initialize_agent, get_ys, choose_prob, cumlativeBuyerExp, \
getPurchases
def learn_policy(run_config, seller_info, buyer_info, train_config, logger_pass):
# Initialize the logger
logger = logger_handle(logger_pass)
# get required parameters for the SAC algorithm
aux_price_min = 1 / seller_info.max_price
aux_price_max = 1 / seller_info.min_price
logger.info("Fetched raw market information...")
# initialize seller agents
sellers, logger = initialize_agent(seller_info, buyer_info, train_config, logger)
# Containers to record history (append on a Python list is amortized O(1))
price_history = []
purchase_history = []
provided_resource_history = []
seller_utility_history = []
seller_penalty_history = []
buyer_utility_history = []
buyer_penalty_history = []
# Start Loop for training
logger.info("Starting training iterations...")
start_time = time.time()
env_state = np.random.randint(0, train_config.action_count, seller_info.count)
next_state = np.random.randint(0, train_config.action_count, seller_info.count)
for train_iter in range(0, train_config.iterations):
if train_iter % 10 == 0:
logger.info("Finished %d training iterations in %.3f secs..." % (train_iter, time.time() - start_time))
# get the prices for all seller agents
ydiffActions = []
for tmpSeller in sellers:
ydiffActions.append(tmpSeller.policy_net.get_action(env_state, deterministic=train_config.deterministic))
ydiffActions = np.array(ydiffActions).flatten()
ys = aux_price_min + ydiffActions
probAll, yAll = choose_prob(ys, compare=False, yAll=None)
# Take step in environment: update env state by getting demands from consumers.
# Save prices in history
prices = 1 / ys
price_history.append(prices)
cumulativeBuyerExperience = cumlativeBuyerExp(buyer_info, sellers)
X = getPurchases(buyer_info, cumulativeBuyerExperience, ys, probAll)
# Save purchased history
purchases = X.sum(axis=0)
purchase_history.append(purchases)
# Get Buyer utilities and penalties in history
buyerUtilities = buyerUtilitiesCalculator(X, ys, buyer_info.V, buyer_info.a_val, probAll,
buyer_info.count,
cumulativeBuyerExperience, buyer_info.unfinished_task_penalty)
buyer_utility_history.append(buyerUtilities)
buyerPenalties = buyerPenaltiesCalculator(X, ys, buyer_info.V, buyer_info.a_val, buyer_info.count,
cumulativeBuyerExperience, buyer_info.unfinished_task_penalty)
buyer_penalty_history.append(buyerPenalties)
# get next state based on actions taken in this round
next_state = ydiff2action(ydiffActions, train_config.action_count, aux_price_min, aux_price_max)  # actions taken this round become the next state
# Based on demands, calculate reward for all agents, and add observation to agents
seller_utilities = []
seller_penalties = []
seller_provided_resources = []
for j in range(0, seller_info.count):
x_j = X[j]
tmpSellerUtility, tmpSellerPenalty, z_j = sellers[j].reward(x_j, yAll)
reward = tmpSellerUtility + tmpSellerPenalty
# Update seller values
sellers[j].add_purchase_history(x_j, z_j)
seller_utilities.append(tmpSellerUtility)
seller_penalties.append(tmpSellerPenalty)
seller_provided_resources.append(z_j)
# train agent
sellers[j].replay_buffer.push(env_state, [ydiffActions[sellers[j].id]], reward, next_state, False)
if len(sellers[j].replay_buffer) > train_config.batch_size:
for i in range(train_config.update_itr):
_ = sellers[j].update(train_config.batch_size, reward_scale=10.,
auto_entropy=train_config.auto_entropy,
target_entropy=-1. * sellers[j].action_size)
if train_iter % (train_config.update_step_size) == 0:
sellers[j].save_model()
# set current state to next state
env_state = next_state
# Save seller utilities and penalties in history
seller_utilities = np.array(seller_utilities)
seller_penalties = np.array(seller_penalties)
seller_utility_history.append(seller_utilities)
seller_penalty_history.append(seller_penalties)
# update provided resources history
seller_provided_resources = np.array(seller_provided_resources)
provided_resource_history.append(seller_provided_resources)
results_dict = {
'policy_store': train_config.agents_store_dir,
'buyer_info': buyer_info,
'seller_info': seller_info,
'price_history': price_history,
'seller_utilties': seller_utility_history,
'seller_penalties': seller_penalty_history,
'buyer_utilties': buyer_utility_history,
'buyer_penalties': buyer_penalty_history,
'demand_history': purchase_history,
'supply_history': provided_resource_history
}
## Training evaluation
return results_dict
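The loop above works in an auxiliary price variable y = 1/price: `aux_price_min = 1/max_price`, `aux_price_max = 1/min_price`, and actions are offsets above `aux_price_min` discretized into `action_count` bins. The real `ydiff2action`/`action2y` helpers live in `training.SAC.run_helper` and may differ; a self-contained sketch of that discretization, under those assumptions:

```python
import numpy as np

def ydiff_to_action(ydiff, action_count, y_min, y_max):
    """Map offsets above y_min onto one of `action_count` discrete bins."""
    step = (y_max - y_min) / (action_count - 1)
    return np.clip(np.round(ydiff / step), 0, action_count - 1).astype(int)

def action_to_price(action, action_count, y_min, y_max):
    """Recover the price implied by a discrete action: price = 1 / y."""
    step = (y_max - y_min) / (action_count - 1)
    y = y_min + action * step
    return 1.0 / y

# Example: prices between 2.0 and 10.0 give y in [0.1, 0.5]
y_min, y_max, n = 1 / 10.0, 1 / 2.0, 5
acts = ydiff_to_action(np.array([0.0, 0.2, 0.4]), n, y_min, y_max)
```

Note the inversion: a larger auxiliary value y corresponds to a lower posted price, which is why the loop records `prices = 1 / ys`.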
def eval_policy(seller_info, buyer_info, train_config, results_dir, logger_pass):
# Initialize the logger
logger = logger_handle(logger_pass)
# get required parameters for the SAC algorithm
aux_price_min = 1 / seller_info.max_price
aux_price_max = 1 / seller_info.min_price
logger.info("Fetched raw market information...")
# set mode to testing
train_config.test = True
# initialize seller agents
sellers, logger = initialize_agent(seller_info, buyer_info, train_config, logger, is_trainer=False)
# Containers to record history (append on a Python list is amortized O(1))
price_history = []
purchase_history = []
provided_resource_history = []
seller_utility_history = []
seller_penalty_history = []
buyer_utility_history = []
buyer_penalty_history = []
# Start loop for evaluation
logger.info("Starting evaluation iterations...")
start_time = time.time()
env_state = np.random.randint(0, train_config.action_count, seller_info.count)
next_state = np.random.randint(0, train_config.action_count, seller_info.count)
for eval_iter in range(0, train_config.iterations):
if eval_iter % 10 == 0:
logger.info("Finished %d evaluation iterations in %.3f secs..." % (eval_iter, time.time() - start_time))
# get the prices for all seller agents
ydiffActions = []
for tmpSeller in sellers:
ydiffActions.append(tmpSeller.policy_net.get_action(env_state, deterministic=train_config.deterministic))
ydiffActions = np.array(ydiffActions).flatten()
ys = aux_price_min + ydiffActions
probAll, yAll = choose_prob(ys, compare=False, yAll=None)
# Save prices in history
prices = 1 / ys
price_history.append(prices)
cumulativeBuyerExperience = cumlativeBuyerExp(buyer_info, sellers)
X = getPurchases(buyer_info, cumulativeBuyerExperience, ys, probAll)
# Save purchased history
purchases = X.sum(axis=0)
purchase_history.append(purchases)
# Get Buyer utilities and penalties in history
buyerUtilities = buyerUtilitiesCalculator(X, ys, buyer_info.V, buyer_info.a_val, probAll,
buyer_info.count,
cumulativeBuyerExperience, buyer_info.unfinished_task_penalty)
buyer_utility_history.append(buyerUtilities)
buyerPenalties = buyerPenaltiesCalculator(X, ys, buyer_info.V, buyer_info.a_val, buyer_info.count,
cumulativeBuyerExperience, buyer_info.unfinished_task_penalty)
buyer_penalty_history.append(buyerPenalties)
# get next state based on actions taken in this round
next_state = ydiff2action(ydiffActions, train_config.action_count, aux_price_min, aux_price_max)  # actions taken this round become the next state
# Based on demands, calculate reward for all agents, and add observation to agents
seller_utilities = []
seller_penalties = []
seller_provided_resources = []
for j in range(0, seller_info.count):
x_j = X[j]
tmpSellerUtility, tmpSellerPenalty, z_j = sellers[j].reward(x_j, yAll)
reward = tmpSellerUtility + tmpSellerPenalty
# Update seller values
sellers[j].add_purchase_history(x_j, z_j)
seller_utilities.append(tmpSellerUtility)
seller_penalties.append(tmpSellerPenalty)
seller_provided_resources.append(z_j)
# set current state to next state
env_state = next_state
# Save seller utilities and penalties in history
seller_utilities = np.array(seller_utilities)
seller_penalties = np.array(seller_penalties)
seller_utility_history.append(seller_utilities)
seller_penalty_history.append(seller_penalties)
# update provided resources history
seller_provided_resources = np.array(seller_provided_resources)
provided_resource_history.append(seller_provided_resources)
eval_dict = {
'policy_store': train_config.agents_store_dir,
'buyer_info': buyer_info,
'seller_info': seller_info,
'price_history': price_history,
'seller_utilties': seller_utility_history,
'seller_penalties': seller_penalty_history,
'buyer_utilties': buyer_utility_history,
'buyer_penalties': buyer_penalty_history,
'demand_history': purchase_history,
'supply_history': provided_resource_history
}
return eval_dict
48f12da47a114055cb3cd7b6f7f3a1fdb6c6bf12 | 250 | py | Python | tests/data/python2_print_function.py | qualname/grey | MIT
#!/usr/bin/env python2
from __future__ import print_function
print('hello')
print(u'hello')
print(a, file=sys.stderr)
# output
#!/usr/bin/env python2
from __future__ import print_function
print('hello')
print(u'hello')
print(a, file=sys.stderr)
d28eb7426e58f400ad3dc6e747d3b1c932d73314 | 12,797 | py | Python | tests/core/cohort_flow_test.py | X-DataInitiative/SCALPEL-Analysis | BSD-3-Clause
# License: BSD 3 clause
from scalpel.core.cohort import Cohort
from scalpel.core.cohort_flow import get_steps, cohort_collection_from_cohort_flow
from scalpel.core.cohort_collection import CohortCollection
from scalpel.core.cohort_flow import CohortFlow
from .pyspark_tests import PySparkTest
import pytz
class TestCohortFlow(PySparkTest):
def test_get_steps(self):
"""Test that the parsing of cohorts."""
input = """
{
"intermediate_operations": {
"operation": {
"type": "union",
"name": "outcome",
"parents": ["liberal_fractures", "hospit_fractures"]
}
},
"cohorts": [
"extract_patients",
"exposures",
"filter_patients",
"outcome"
]
}
"""
result = get_steps(input)
expected = ["extract_patients", "exposures", "filter_patients", "outcome"]
self.assertSequenceEqual(result, expected)
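`get_steps` is imported from `scalpel.core.cohort_flow` and not shown here; from the test above it evidently parses the JSON config and returns the `"cohorts"` list. A plausible sketch (the real implementation may also validate `intermediate_operations`):

```python
import json

def get_steps(config_text):
    """Parse a cohort-flow JSON config and return the ordered cohort names."""
    config = json.loads(config_text)
    return config["cohorts"]
```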
def test_cohort_collection_from_cohort_flow(self):
input = """
{
"intermediate_operations": {
"operation": {
"type": "union",
"name": "outcome",
"parents": ["liberal_fractures", "hospit_fractures"]
}
},
"cohorts": [
"extract_patients",
"exposures",
"filter_patients",
"outcome"
]
}
"""
df, _ = self.create_spark_df({"patientID": [1, 2, 3]})
cc = CohortCollection(
{
"liberal_fractures": Cohort(
"liberal_fractures", "liberal_fractures", df, None
),
"hospit_fractures": Cohort(
"hospit_fractures", "hospit_fractures", df, None
),
}
)
result = cohort_collection_from_cohort_flow(cc, input)
self.assertSetEqual(
set(result.cohorts.keys()),
{"liberal_fractures", "hospit_fractures", "outcome"},
)
def test_steps_flowchart(self):
patients = {
"patientID": ["0", "1", "2", "3", "4"], # uuid
"gender": [1, 2, 2, 2, 1], # in {1, 2}
"birthDate": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
pytz.datetime.datetime(1951, 5, 1, tzinfo=pytz.UTC),
pytz.datetime.datetime(1942, 1, 12, tzinfo=pytz.UTC),
pytz.datetime.datetime(1933, 10, 3, tzinfo=pytz.UTC),
pytz.datetime.datetime(1937, 12, 31, tzinfo=pytz.UTC),
],
"deathDate": [
None,
None,
None,
pytz.datetime.datetime(2011, 6, 20, tzinfo=pytz.UTC),
pytz.datetime.datetime(2012, 12, 10, tzinfo=pytz.UTC),
], # can be null
}
exposure_events = {
"patientID": ["0", "10", "4", "2"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 7, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 3, 28, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 3, tzinfo=pytz.UTC),
pytz.datetime.datetime(2010, 11, 22, tzinfo=pytz.UTC),
],
"end": [
None,
None,
None,
pytz.datetime.datetime(2011, 11, 22, tzinfo=pytz.UTC),
],
"value": ["foo"] * 4,
"category": ["exposure"] * 4,
"groupID": [0] * 4,
"weight": [1] * 4,
}
outcome_events = {
"patientID": ["0", "3", "4", "22"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 8, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 3, 29, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 4, tzinfo=pytz.UTC),
pytz.datetime.datetime(2010, 11, 23, tzinfo=pytz.UTC),
],
"end": [
None,
None,
None,
pytz.datetime.datetime(2011, 11, 22, tzinfo=pytz.UTC),
],
"value": ["bar"] * 4,
"category": ["outcome"] * 4,
"groupID": [0] * 4,
"weight": [1] * 4,
}
patients_df, _ = self.create_spark_df(patients)
exp_events_df, _ = self.create_spark_df(exposure_events)
out_events_df, _ = self.create_spark_df(outcome_events)
base_population = Cohort(
"base_population", "base_population", patients_df, None
)
exposures = Cohort(
"exposures",
"exposures",
exp_events_df.select("patientID").distinct(),
exp_events_df,
)
outcomes = Cohort(
"outcomes",
"outcomes",
out_events_df.select("patientID").distinct(),
out_events_df,
)
flow = CohortFlow([base_population, exposures, outcomes])
expected_step_1 = {
"patientID": ["0", "1", "2", "3", "4"], # uuid
"gender": [1, 2, 2, 2, 1], # in {1, 2}
"birthDate": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
pytz.datetime.datetime(1951, 5, 1, tzinfo=pytz.UTC),
pytz.datetime.datetime(1942, 1, 12, tzinfo=pytz.UTC),
pytz.datetime.datetime(1933, 10, 3, tzinfo=pytz.UTC),
pytz.datetime.datetime(1937, 12, 31, tzinfo=pytz.UTC),
],
"deathDate": [
None,
None,
None,
pytz.datetime.datetime(2011, 6, 20, tzinfo=pytz.UTC),
pytz.datetime.datetime(2012, 12, 10, tzinfo=pytz.UTC),
], # can be null
}
expected_step_2 = {
"patientID": ["0", "2", "4"], # uuid
"gender": [1, 2, 1], # in {1, 2}
"birthDate": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
pytz.datetime.datetime(1942, 1, 12, tzinfo=pytz.UTC),
pytz.datetime.datetime(1937, 12, 31, tzinfo=pytz.UTC),
],
"deathDate": [
None,
None,
pytz.datetime.datetime(2012, 12, 10, tzinfo=pytz.UTC),
], # can be null
}
expected_step_3 = {
"patientID": ["0", "4"], # uuid
"gender": [1, 1], # in {1, 2}
"birthDate": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
pytz.datetime.datetime(1937, 12, 31, tzinfo=pytz.UTC),
],
"deathDate": [
None,
pytz.datetime.datetime(2012, 12, 10, tzinfo=pytz.UTC),
], # can be null
}
step1_df, _ = self.create_spark_df(expected_step_1)
step2_df, _ = self.create_spark_df(expected_step_2)
step3_df, _ = self.create_spark_df(expected_step_3)
step_1_cohort = Cohort("step1", "step1", step1_df, None)
step_2_cohort = Cohort("step2", "step2", step2_df, None)
step_3_cohort = Cohort("step3", "step3", step3_df, None)
for result, expected in zip(
flow, [step_1_cohort, step_2_cohort, step_3_cohort]
):
self.assertEqual(result, expected)
# Case where the Flowchart has only one element
flow_2 = CohortFlow([base_population])
for result, expected in zip(flow_2, [step_1_cohort]):
self.assertEqual(result, expected)
# Case where the Flowchart is empty
with self.assertWarns(Warning):
CohortFlow([])
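The assertions above suggest that iterating a `CohortFlow` yields each cohort restricted to the subjects surviving all previous steps — a running intersection of patient IDs. A toy model over plain sets (not the real `Cohort`/Spark machinery) that reproduces the step populations checked in `test_steps_flowchart`:

```python
def flow_steps(cohorts):
    """Yield the running intersection of a sequence of patient-ID sets,
    mimicking how iterating a CohortFlow narrows the population step by step."""
    current = None
    for ids in cohorts:
        current = set(ids) if current is None else current & set(ids)
        yield current

base = {"0", "1", "2", "3", "4"}   # base_population
exposed = {"0", "10", "4", "2"}    # exposures
outcome = {"0", "3", "4", "22"}    # outcomes
steps = [sorted(s) for s in flow_steps([base, exposed, outcome])]
```

With these inputs the steps are all five patients, then {0, 2, 4}, then {0, 4} — matching `expected_step_1`/`_2`/`_3` above.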
def test_prepend_cohort_flowchart(self):
patients = {
"patientID": ["0", "1", "2", "3", "4"], # uuid
"gender": [1, 2, 2, 2, 1], # in {1, 2}
"birthDate": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
pytz.datetime.datetime(1951, 5, 1, tzinfo=pytz.UTC),
pytz.datetime.datetime(1942, 1, 12, tzinfo=pytz.UTC),
pytz.datetime.datetime(1933, 10, 3, tzinfo=pytz.UTC),
pytz.datetime.datetime(1937, 12, 31, tzinfo=pytz.UTC),
],
"deathDate": [
None,
None,
None,
pytz.datetime.datetime(2011, 6, 20, tzinfo=pytz.UTC),
pytz.datetime.datetime(2012, 12, 10, tzinfo=pytz.UTC),
], # can be null
}
exposure_events = {
"patientID": ["0", "10", "4", "2"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 7, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 3, 28, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 3, tzinfo=pytz.UTC),
pytz.datetime.datetime(2010, 11, 22, tzinfo=pytz.UTC),
],
"end": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
None,
None,
pytz.datetime.datetime(2011, 11, 22, tzinfo=pytz.UTC),
],
"value": ["foo"] * 4,
"category": ["exposure"] * 4,
"groupID": [0] * 4,
"weight": [1] * 4,
}
outcome_events = {
"patientID": ["0", "3", "4", "22"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 8, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 3, 29, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 4, tzinfo=pytz.UTC),
pytz.datetime.datetime(2010, 11, 23, tzinfo=pytz.UTC),
],
"end": [
pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC),
None,
None,
pytz.datetime.datetime(2011, 11, 22, tzinfo=pytz.UTC),
],
"value": ["bar"] * 4,
"category": ["outcome"] * 4,
"groupID": [0] * 4,
"weight": [1] * 4,
}
patients_df, _ = self.create_spark_df(patients)
exp_events_df, _ = self.create_spark_df(exposure_events)
out_events_df, _ = self.create_spark_df(outcome_events)
base_population = Cohort(
"base_population", "base_population", patients_df, None
)
exposures = Cohort(
"exposures",
"exposures",
exp_events_df.select("patientID").distinct(),
exp_events_df,
)
outcomes = Cohort(
"outcomes",
"outcomes",
out_events_df.select("patientID").distinct(),
out_events_df,
)
flow = CohortFlow([base_population, exposures])
expected_step_1 = outcome_events
expected_step_2 = {
"patientID": ["0", "3", "4"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 8, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 3, 29, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 4, tzinfo=pytz.UTC),
],
"end": [pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC), None, None],
"value": ["bar"] * 3,
"category": ["outcome"] * 3,
"groupID": [0] * 3,
"weight": [1] * 3,
}
expected_step_3 = {
"patientID": ["0", "4"], # uuid
"start": [
pytz.datetime.datetime(2010, 6, 8, tzinfo=pytz.UTC),
pytz.datetime.datetime(2011, 7, 4, tzinfo=pytz.UTC),
],
"end": [pytz.datetime.datetime(1934, 7, 27, tzinfo=pytz.UTC), None],
"value": ["bar"] * 2,
"category": ["outcome"] * 2,
"groupID": [0] * 2,
"weight": [1] * 2,
}
step1_df, _ = self.create_spark_df(expected_step_1)
step2_df, _ = self.create_spark_df(expected_step_2)
step3_df, _ = self.create_spark_df(expected_step_3)
step_1_cohort = Cohort(
"step1", "step1", step1_df.select("patientID").distinct(), step1_df
)
step_2_cohort = Cohort(
"step2", "step2", step2_df.select("patientID").distinct(), step2_df
)
step_3_cohort = Cohort(
"step3", "step3", step3_df.select("patientID").distinct(), step3_df
)
for result, expected in zip(
flow.prepend_cohort(outcomes),
[step_1_cohort, step_2_cohort, step_3_cohort],
):
self.assertEqual(result, expected)
d2b292baae04b639fdf6d12ac7583b2ca3ba8ad8 | 191 | py | Python | gym_pomdp_wrappers/__init__.py | stweigand/gym-pomdp-wrappers | MIT
from gym_pomdp_wrappers.history_env_mujoco import MuJoCoHistoryEnv
from gym_pomdp_wrappers.history_env_rocksample import RockSampleHistoryEnv
from gym_pomdp_wrappers.noisy_env import NoisyEnv
d2bb18bbdf621250ce3907b0d84fcc1d8d58d84a | 5,026 | py | Python | otros/insert_pipes.py | Ivanfdezr/CentralSoftware | MIT
import json
import dbUtils
Y = {}
Y['H40'] = [40,60]
Y['J55'] = [55,75]
Y['K55'] = [55,95]
Y['M65'] = [65,85]
Y['L80'] = [80,95]
Y['N80'] = [80,100]
Y['C90'] = [90,100]
Y['C95'] = [95,105]
Y['T95'] = [95,105]
Y['P110'] = [110,125]
Y['Q125'] = [125,135]
Y['TAC80'] = [80,100]
Y['TAC95'] = [95,110]
Y['TAC110'] = [110,125]
Y['TAC140'] = [140,150]
Y['TRC80'] = [80,95]
Y['TRC95'] = [95,105]
Y['TRC95HC'] = [95,105]
Y['TRC110'] = [110,115]
with open('TR_DB.json','r') as f:
s = f.read()
DB = json.loads(s)
pID = 6001
for db in DB:
grade = db['Grade']
query = """ insert into pipes (pipeID,vendor,grade) values ('{pID}','Tenaris Tamsa','{grade}') """.format(pID=pID,grade=grade)
dbUtils.execute_query(query)
value = db['OD']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2030',(select u.unitID from units u where u.representation='in'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['ID']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2031',(select u.unitID from units u where u.representation='in'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Weight']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2032',(select u.unitID from units u where u.representation='lb/ft'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = float(Y[grade][0])*1000
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2033',(select u.unitID from units u where u.representation='psi'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = float(db['Tension'])*1000
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2034',(select u.unitID from units u where u.representation='lbf'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = float(Y[grade][1])*1000
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2035',(select u.unitID from units u where u.representation='psi'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Colapso']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2036',(select u.unitID from units u where u.representation='psi'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Presion interna']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2037',(select u.unitID from units u where u.representation='psi'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Presion prueba']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2038',(select u.unitID from units u where u.representation='psi'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2039',(select u.unitID from units u where u.representation='g/cm³'),'7.85') """.format(pID=pID)
dbUtils.execute_query(query)
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2040',(select u.unitID from units u where u.representation='psi'),'30e6') """.format(pID=pID)
dbUtils.execute_query(query)
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2041',(select u.unitID from units u where u.representation='1'),'0.3') """.format(pID=pID)
dbUtils.execute_query(query)
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2045',(select u.unitID from units u where u.representation='ft'),'40') """.format(pID=pID)
dbUtils.execute_query(query)
value = db['Drift']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2046',(select u.unitID from units u where u.representation='in'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Thickness']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2047',(select u.unitID from units u where u.representation='in'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
value = db['Cross-section']
query = """ insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values
('{pID}','2048',(select u.unitID from units u where u.representation='in²'),'{value}') """.format(pID=pID,value=value)
dbUtils.execute_query(query)
pID += 1
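The per-property inserts above all repeat one pattern: build an `insert into pipe_properties` statement with `str.format` and pass it to `dbUtils.execute_query`. A small helper could factor that out — this sketch only builds and returns the statement so the shape is easy to see (`build_property_insert` is a hypothetical name, and, like the script above, it interpolates values directly; a parameterized query via the DB driver would be safer against quoting issues):

```python
def build_property_insert(pID, fieldID, unit, value):
    """Build one pipe_properties insert statement (sketch; values are not escaped)."""
    return (
        "insert into pipe_properties (pipeID,fieldID,nativeUnitID,valueRepresentation) values "
        "('{pID}','{fieldID}',"
        "(select u.unitID from units u where u.representation='{unit}'),'{value}')"
    ).format(pID=pID, fieldID=fieldID, unit=unit, value=value)

# e.g. build_property_insert(6001, 2030, 'in', 10.75) reproduces the OD insert above,
# and the loop body shrinks to one call per (fieldID, unit, value) triple.
```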
d2bb3f6ca53845e81d43f70e4657b7716a5c88cb | 39,143 | py | Python | msgraph-cli-extensions/v1_0/applications_v1_0/azext_applications_v1_0/vendored_sdks/applications/models/_applications_enums.py | thewahome/msgraph-cli | MIT
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from enum import Enum, EnumMeta
from six import with_metaclass
class _CaseInsensitiveEnumMeta(EnumMeta):
def __getitem__(self, name):
return super().__getitem__(name.upper())
def __getattr__(cls, name):
"""Return the enum member matching `name`
We use __getattr__ instead of descriptors or inserting into the enum
class' __dict__ in order to support `name` and `value` being both
properties for enum members (which live in the class' __dict__) and
enum members themselves.
"""
try:
return cls._member_map_[name.upper()]
except KeyError:
raise AttributeError(name)
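The metaclass above makes enum member lookup case-insensitive by name, both for subscript access and attribute access. A quick standalone illustration (using a plain `metaclass=` declaration rather than `six.with_metaclass`, and a made-up `OrderBy` enum rather than the generated ones below):

```python
from enum import Enum, EnumMeta

class CaseInsensitiveEnumMeta(EnumMeta):
    # Mirror of _CaseInsensitiveEnumMeta: uppercase the name before lookup.
    def __getitem__(cls, name):
        return super().__getitem__(name.upper())

    def __getattr__(cls, name):
        try:
            return cls._member_map_[name.upper()]
        except KeyError:
            raise AttributeError(name)

class OrderBy(Enum, metaclass=CaseInsensitiveEnumMeta):
    ID = "id"
    DISPLAY_NAME = "displayName"

# OrderBy["id"], OrderBy["Id"], and OrderBy["ID"] all resolve to OrderBy.ID
```

This is why the generated enums can be addressed with the camelCase wire values (`"displayName"`) or the SCREAMING_SNAKE member names interchangeably.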
class Enum10(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum11(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum12(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum13(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum14(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum15(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum16(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum17(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum18(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum19(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum20(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum21(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum22(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum23(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum24(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
APP_ROLE_ID = "appRoleId"
APP_ROLE_ID_DESC = "appRoleId desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_DISPLAY_NAME_DESC = "principalDisplayName desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
PRINCIPAL_TYPE = "principalType"
PRINCIPAL_TYPE_DESC = "principalType desc"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_DISPLAY_NAME_DESC = "resourceDisplayName desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
class Enum25(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum26(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum27(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
ACCOUNT_ENABLED = "accountEnabled"
ACCOUNT_ENABLED_DESC = "accountEnabled desc"
ADD_INS = "addIns"
ADD_INS_DESC = "addIns desc"
ALTERNATIVE_NAMES = "alternativeNames"
ALTERNATIVE_NAMES_DESC = "alternativeNames desc"
APP_DESCRIPTION = "appDescription"
APP_DESCRIPTION_DESC = "appDescription desc"
APP_DISPLAY_NAME = "appDisplayName"
APP_DISPLAY_NAME_DESC = "appDisplayName desc"
APP_ID = "appId"
APP_ID_DESC = "appId desc"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APPLICATION_TEMPLATE_ID_DESC = "applicationTemplateId desc"
APP_OWNER_ORGANIZATION_ID = "appOwnerOrganizationId"
APP_OWNER_ORGANIZATION_ID_DESC = "appOwnerOrganizationId desc"
APP_ROLE_ASSIGNMENT_REQUIRED = "appRoleAssignmentRequired"
APP_ROLE_ASSIGNMENT_REQUIRED_DESC = "appRoleAssignmentRequired desc"
APP_ROLES = "appRoles"
APP_ROLES_DESC = "appRoles desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
HOMEPAGE = "homepage"
HOMEPAGE_DESC = "homepage desc"
INFO = "info"
INFO_DESC = "info desc"
KEY_CREDENTIALS = "keyCredentials"
KEY_CREDENTIALS_DESC = "keyCredentials desc"
LOGIN_URL = "loginUrl"
LOGIN_URL_DESC = "loginUrl desc"
LOGOUT_URL = "logoutUrl"
LOGOUT_URL_DESC = "logoutUrl desc"
NOTES = "notes"
NOTES_DESC = "notes desc"
NOTIFICATION_EMAIL_ADDRESSES = "notificationEmailAddresses"
NOTIFICATION_EMAIL_ADDRESSES_DESC = "notificationEmailAddresses desc"
OAUTH2_PERMISSION_SCOPES = "oauth2PermissionScopes"
OAUTH2_PERMISSION_SCOPES_DESC = "oauth2PermissionScopes desc"
PASSWORD_CREDENTIALS = "passwordCredentials"
PASSWORD_CREDENTIALS_DESC = "passwordCredentials desc"
PREFERRED_SINGLE_SIGN_ON_MODE = "preferredSingleSignOnMode"
PREFERRED_SINGLE_SIGN_ON_MODE_DESC = "preferredSingleSignOnMode desc"
PREFERRED_TOKEN_SIGNING_KEY_THUMBPRINT = "preferredTokenSigningKeyThumbprint"
PREFERRED_TOKEN_SIGNING_KEY_THUMBPRINT_DESC = "preferredTokenSigningKeyThumbprint desc"
REPLY_URLS = "replyUrls"
REPLY_URLS_DESC = "replyUrls desc"
SAML_SINGLE_SIGN_ON_SETTINGS = "samlSingleSignOnSettings"
SAML_SINGLE_SIGN_ON_SETTINGS_DESC = "samlSingleSignOnSettings desc"
SERVICE_PRINCIPAL_NAMES = "servicePrincipalNames"
SERVICE_PRINCIPAL_NAMES_DESC = "servicePrincipalNames desc"
SERVICE_PRINCIPAL_TYPE = "servicePrincipalType"
SERVICE_PRINCIPAL_TYPE_DESC = "servicePrincipalType desc"
TAGS = "tags"
TAGS_DESC = "tags desc"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
TOKEN_ENCRYPTION_KEY_ID_DESC = "tokenEncryptionKeyId desc"
class Enum28(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
ACCOUNT_ENABLED = "accountEnabled"
ADD_INS = "addIns"
ALTERNATIVE_NAMES = "alternativeNames"
APP_DESCRIPTION = "appDescription"
APP_DISPLAY_NAME = "appDisplayName"
APP_ID = "appId"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APP_OWNER_ORGANIZATION_ID = "appOwnerOrganizationId"
APP_ROLE_ASSIGNMENT_REQUIRED = "appRoleAssignmentRequired"
APP_ROLES = "appRoles"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
HOMEPAGE = "homepage"
INFO = "info"
KEY_CREDENTIALS = "keyCredentials"
LOGIN_URL = "loginUrl"
LOGOUT_URL = "logoutUrl"
NOTES = "notes"
NOTIFICATION_EMAIL_ADDRESSES = "notificationEmailAddresses"
OAUTH2_PERMISSION_SCOPES = "oauth2PermissionScopes"
PASSWORD_CREDENTIALS = "passwordCredentials"
PREFERRED_SINGLE_SIGN_ON_MODE = "preferredSingleSignOnMode"
PREFERRED_TOKEN_SIGNING_KEY_THUMBPRINT = "preferredTokenSigningKeyThumbprint"
REPLY_URLS = "replyUrls"
SAML_SINGLE_SIGN_ON_SETTINGS = "samlSingleSignOnSettings"
SERVICE_PRINCIPAL_NAMES = "servicePrincipalNames"
SERVICE_PRINCIPAL_TYPE = "servicePrincipalType"
TAGS = "tags"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
APP_ROLE_ASSIGNED_TO = "appRoleAssignedTo"
APP_ROLE_ASSIGNMENTS = "appRoleAssignments"
CLAIMS_MAPPING_POLICIES = "claimsMappingPolicies"
CREATED_OBJECTS = "createdObjects"
ENDPOINTS = "endpoints"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
MEMBER_OF = "memberOf"
OAUTH2_PERMISSION_GRANTS = "oauth2PermissionGrants"
OWNED_OBJECTS = "ownedObjects"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
TRANSITIVE_MEMBER_OF = "transitiveMemberOf"
class Enum29(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APP_ROLE_ASSIGNED_TO = "appRoleAssignedTo"
APP_ROLE_ASSIGNMENTS = "appRoleAssignments"
CLAIMS_MAPPING_POLICIES = "claimsMappingPolicies"
CREATED_OBJECTS = "createdObjects"
ENDPOINTS = "endpoints"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
MEMBER_OF = "memberOf"
OAUTH2_PERMISSION_GRANTS = "oauth2PermissionGrants"
OWNED_OBJECTS = "ownedObjects"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
TRANSITIVE_MEMBER_OF = "transitiveMemberOf"
class Enum30(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
ACCOUNT_ENABLED = "accountEnabled"
ADD_INS = "addIns"
ALTERNATIVE_NAMES = "alternativeNames"
APP_DESCRIPTION = "appDescription"
APP_DISPLAY_NAME = "appDisplayName"
APP_ID = "appId"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APP_OWNER_ORGANIZATION_ID = "appOwnerOrganizationId"
APP_ROLE_ASSIGNMENT_REQUIRED = "appRoleAssignmentRequired"
APP_ROLES = "appRoles"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
HOMEPAGE = "homepage"
INFO = "info"
KEY_CREDENTIALS = "keyCredentials"
LOGIN_URL = "loginUrl"
LOGOUT_URL = "logoutUrl"
NOTES = "notes"
NOTIFICATION_EMAIL_ADDRESSES = "notificationEmailAddresses"
OAUTH2_PERMISSION_SCOPES = "oauth2PermissionScopes"
PASSWORD_CREDENTIALS = "passwordCredentials"
PREFERRED_SINGLE_SIGN_ON_MODE = "preferredSingleSignOnMode"
PREFERRED_TOKEN_SIGNING_KEY_THUMBPRINT = "preferredTokenSigningKeyThumbprint"
REPLY_URLS = "replyUrls"
SAML_SINGLE_SIGN_ON_SETTINGS = "samlSingleSignOnSettings"
SERVICE_PRINCIPAL_NAMES = "servicePrincipalNames"
SERVICE_PRINCIPAL_TYPE = "servicePrincipalType"
TAGS = "tags"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
APP_ROLE_ASSIGNED_TO = "appRoleAssignedTo"
APP_ROLE_ASSIGNMENTS = "appRoleAssignments"
CLAIMS_MAPPING_POLICIES = "claimsMappingPolicies"
CREATED_OBJECTS = "createdObjects"
ENDPOINTS = "endpoints"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
MEMBER_OF = "memberOf"
OAUTH2_PERMISSION_GRANTS = "oauth2PermissionGrants"
OWNED_OBJECTS = "ownedObjects"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
TRANSITIVE_MEMBER_OF = "transitiveMemberOf"
class Enum31(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APP_ROLE_ASSIGNED_TO = "appRoleAssignedTo"
APP_ROLE_ASSIGNMENTS = "appRoleAssignments"
CLAIMS_MAPPING_POLICIES = "claimsMappingPolicies"
CREATED_OBJECTS = "createdObjects"
ENDPOINTS = "endpoints"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
MEMBER_OF = "memberOf"
OAUTH2_PERMISSION_GRANTS = "oauth2PermissionGrants"
OWNED_OBJECTS = "ownedObjects"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
TRANSITIVE_MEMBER_OF = "transitiveMemberOf"
class Enum32(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
APP_ROLE_ID = "appRoleId"
APP_ROLE_ID_DESC = "appRoleId desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_DISPLAY_NAME_DESC = "principalDisplayName desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
PRINCIPAL_TYPE = "principalType"
PRINCIPAL_TYPE_DESC = "principalType desc"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_DISPLAY_NAME_DESC = "resourceDisplayName desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
class Enum33(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum34(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum35(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
APP_ROLE_ID = "appRoleId"
APP_ROLE_ID_DESC = "appRoleId desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_DISPLAY_NAME_DESC = "principalDisplayName desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
PRINCIPAL_TYPE = "principalType"
PRINCIPAL_TYPE_DESC = "principalType desc"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_DISPLAY_NAME_DESC = "resourceDisplayName desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
class Enum36(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum37(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum38(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum39(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum40(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum41(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum42(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum43(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum44(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum45(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
CAPABILITY = "capability"
CAPABILITY_DESC = "capability desc"
PROVIDER_ID = "providerId"
PROVIDER_ID_DESC = "providerId desc"
PROVIDER_NAME = "providerName"
PROVIDER_NAME_DESC = "providerName desc"
PROVIDER_RESOURCE_ID = "providerResourceId"
PROVIDER_RESOURCE_ID_DESC = "providerResourceId desc"
URI = "uri"
URI_DESC = "uri desc"
class Enum46(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
CAPABILITY = "capability"
PROVIDER_ID = "providerId"
PROVIDER_NAME = "providerName"
PROVIDER_RESOURCE_ID = "providerResourceId"
URI = "uri"
class Enum47(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
CAPABILITY = "capability"
PROVIDER_ID = "providerId"
PROVIDER_NAME = "providerName"
PROVIDER_RESOURCE_ID = "providerResourceId"
URI = "uri"
class Enum48(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum49(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum5(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum50(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum51(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum52(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum53(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum54(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum55(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
CLIENT_ID = "clientId"
CLIENT_ID_DESC = "clientId desc"
CONSENT_TYPE = "consentType"
CONSENT_TYPE_DESC = "consentType desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
SCOPE = "scope"
SCOPE_DESC = "scope desc"
class Enum56(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
CLIENT_ID = "clientId"
CONSENT_TYPE = "consentType"
PRINCIPAL_ID = "principalId"
RESOURCE_ID = "resourceId"
SCOPE = "scope"
class Enum57(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
CLIENT_ID = "clientId"
CLIENT_ID_DESC = "clientId desc"
CONSENT_TYPE = "consentType"
CONSENT_TYPE_DESC = "consentType desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
SCOPE = "scope"
SCOPE_DESC = "scope desc"
class Enum58(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum59(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum6(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
APP_DISPLAY_NAME = "appDisplayName"
APP_DISPLAY_NAME_DESC = "appDisplayName desc"
DATA_TYPE = "dataType"
DATA_TYPE_DESC = "dataType desc"
IS_SYNCED_FROM_ON_PREMISES = "isSyncedFromOnPremises"
IS_SYNCED_FROM_ON_PREMISES_DESC = "isSyncedFromOnPremises desc"
NAME = "name"
NAME_DESC = "name desc"
TARGET_OBJECTS = "targetObjects"
TARGET_OBJECTS_DESC = "targetObjects desc"
class Enum60(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum61(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum62(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum63(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum64(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum65(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum66(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum67(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum68(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum69(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
DEFINITION = "definition"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
APPLIES_TO = "appliesTo"
class Enum7(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_DISPLAY_NAME = "appDisplayName"
DATA_TYPE = "dataType"
IS_SYNCED_FROM_ON_PREMISES = "isSyncedFromOnPremises"
NAME = "name"
TARGET_OBJECTS = "targetObjects"
class Enum70(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
APPLIES_TO = "appliesTo"
class Enum71(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Enum72(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum73(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
class Enum74(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
class Enum75(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
APP_ROLE_ID = "appRoleId"
APP_ROLE_ID_DESC = "appRoleId desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_DISPLAY_NAME_DESC = "principalDisplayName desc"
PRINCIPAL_ID = "principalId"
PRINCIPAL_ID_DESC = "principalId desc"
PRINCIPAL_TYPE = "principalType"
PRINCIPAL_TYPE_DESC = "principalType desc"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_DISPLAY_NAME_DESC = "resourceDisplayName desc"
RESOURCE_ID = "resourceId"
RESOURCE_ID_DESC = "resourceId desc"
class Enum76(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum77(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_ROLE_ID = "appRoleId"
CREATED_DATE_TIME = "createdDateTime"
PRINCIPAL_DISPLAY_NAME = "principalDisplayName"
PRINCIPAL_ID = "principalId"
PRINCIPAL_TYPE = "principalType"
RESOURCE_DISPLAY_NAME = "resourceDisplayName"
RESOURCE_ID = "resourceId"
class Enum8(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
APP_DISPLAY_NAME = "appDisplayName"
DATA_TYPE = "dataType"
IS_SYNCED_FROM_ON_PREMISES = "isSyncedFromOnPremises"
NAME = "name"
TARGET_OBJECTS = "targetObjects"
class Enum9(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
DEFINITION = "definition"
DEFINITION_DESC = "definition desc"
IS_ORGANIZATION_DEFAULT = "isOrganizationDefault"
IS_ORGANIZATION_DEFAULT_DESC = "isOrganizationDefault desc"
class Get1ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
ADD_INS = "addIns"
API = "api"
APP_ID = "appId"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APP_ROLES = "appRoles"
CREATED_DATE_TIME = "createdDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
GROUP_MEMBERSHIP_CLAIMS = "groupMembershipClaims"
IDENTIFIER_URIS = "identifierUris"
INFO = "info"
IS_DEVICE_ONLY_AUTH_SUPPORTED = "isDeviceOnlyAuthSupported"
IS_FALLBACK_PUBLIC_CLIENT = "isFallbackPublicClient"
KEY_CREDENTIALS = "keyCredentials"
LOGO = "logo"
NOTES = "notes"
OAUTH2_REQUIRE_POST_RESPONSE = "oauth2RequirePostResponse"
OPTIONAL_CLAIMS = "optionalClaims"
PARENTAL_CONTROL_SETTINGS = "parentalControlSettings"
PASSWORD_CREDENTIALS = "passwordCredentials"
PUBLIC_CLIENT = "publicClient"
PUBLISHER_DOMAIN = "publisherDomain"
REQUIRED_RESOURCE_ACCESS = "requiredResourceAccess"
SIGN_IN_AUDIENCE = "signInAudience"
TAGS = "tags"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
WEB = "web"
CREATED_ON_BEHALF_OF = "createdOnBehalfOf"
EXTENSION_PROPERTIES = "extensionProperties"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
class Get2ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CREATED_ON_BEHALF_OF = "createdOnBehalfOf"
EXTENSION_PROPERTIES = "extensionProperties"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
class Get5ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DELETED_DATE_TIME = "deletedDateTime"
DELETED_DATE_TIME_DESC = "deletedDateTime desc"
ADD_INS = "addIns"
ADD_INS_DESC = "addIns desc"
API = "api"
API_DESC = "api desc"
APP_ID = "appId"
APP_ID_DESC = "appId desc"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APPLICATION_TEMPLATE_ID_DESC = "applicationTemplateId desc"
APP_ROLES = "appRoles"
APP_ROLES_DESC = "appRoles desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
DESCRIPTION = "description"
DESCRIPTION_DESC = "description desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
GROUP_MEMBERSHIP_CLAIMS = "groupMembershipClaims"
GROUP_MEMBERSHIP_CLAIMS_DESC = "groupMembershipClaims desc"
IDENTIFIER_URIS = "identifierUris"
IDENTIFIER_URIS_DESC = "identifierUris desc"
INFO = "info"
INFO_DESC = "info desc"
IS_DEVICE_ONLY_AUTH_SUPPORTED = "isDeviceOnlyAuthSupported"
IS_DEVICE_ONLY_AUTH_SUPPORTED_DESC = "isDeviceOnlyAuthSupported desc"
IS_FALLBACK_PUBLIC_CLIENT = "isFallbackPublicClient"
IS_FALLBACK_PUBLIC_CLIENT_DESC = "isFallbackPublicClient desc"
KEY_CREDENTIALS = "keyCredentials"
KEY_CREDENTIALS_DESC = "keyCredentials desc"
LOGO = "logo"
LOGO_DESC = "logo desc"
NOTES = "notes"
NOTES_DESC = "notes desc"
OAUTH2_REQUIRE_POST_RESPONSE = "oauth2RequirePostResponse"
OAUTH2_REQUIRE_POST_RESPONSE_DESC = "oauth2RequirePostResponse desc"
OPTIONAL_CLAIMS = "optionalClaims"
OPTIONAL_CLAIMS_DESC = "optionalClaims desc"
PARENTAL_CONTROL_SETTINGS = "parentalControlSettings"
PARENTAL_CONTROL_SETTINGS_DESC = "parentalControlSettings desc"
PASSWORD_CREDENTIALS = "passwordCredentials"
PASSWORD_CREDENTIALS_DESC = "passwordCredentials desc"
PUBLIC_CLIENT = "publicClient"
PUBLIC_CLIENT_DESC = "publicClient desc"
PUBLISHER_DOMAIN = "publisherDomain"
PUBLISHER_DOMAIN_DESC = "publisherDomain desc"
REQUIRED_RESOURCE_ACCESS = "requiredResourceAccess"
REQUIRED_RESOURCE_ACCESS_DESC = "requiredResourceAccess desc"
SIGN_IN_AUDIENCE = "signInAudience"
SIGN_IN_AUDIENCE_DESC = "signInAudience desc"
TAGS = "tags"
TAGS_DESC = "tags desc"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
TOKEN_ENCRYPTION_KEY_ID_DESC = "tokenEncryptionKeyId desc"
WEB = "web"
WEB_DESC = "web desc"
class Get6ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DELETED_DATE_TIME = "deletedDateTime"
ADD_INS = "addIns"
API = "api"
APP_ID = "appId"
APPLICATION_TEMPLATE_ID = "applicationTemplateId"
APP_ROLES = "appRoles"
CREATED_DATE_TIME = "createdDateTime"
DESCRIPTION = "description"
DISPLAY_NAME = "displayName"
GROUP_MEMBERSHIP_CLAIMS = "groupMembershipClaims"
IDENTIFIER_URIS = "identifierUris"
INFO = "info"
IS_DEVICE_ONLY_AUTH_SUPPORTED = "isDeviceOnlyAuthSupported"
IS_FALLBACK_PUBLIC_CLIENT = "isFallbackPublicClient"
KEY_CREDENTIALS = "keyCredentials"
LOGO = "logo"
NOTES = "notes"
OAUTH2_REQUIRE_POST_RESPONSE = "oauth2RequirePostResponse"
OPTIONAL_CLAIMS = "optionalClaims"
PARENTAL_CONTROL_SETTINGS = "parentalControlSettings"
PASSWORD_CREDENTIALS = "passwordCredentials"
PUBLIC_CLIENT = "publicClient"
PUBLISHER_DOMAIN = "publisherDomain"
REQUIRED_RESOURCE_ACCESS = "requiredResourceAccess"
SIGN_IN_AUDIENCE = "signInAudience"
TAGS = "tags"
TOKEN_ENCRYPTION_KEY_ID = "tokenEncryptionKeyId"
WEB = "web"
CREATED_ON_BEHALF_OF = "createdOnBehalfOf"
EXTENSION_PROPERTIES = "extensionProperties"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
class Get7ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CREATED_ON_BEHALF_OF = "createdOnBehalfOf"
EXTENSION_PROPERTIES = "extensionProperties"
HOME_REALM_DISCOVERY_POLICIES = "homeRealmDiscoveryPolicies"
OWNERS = "owners"
TOKEN_ISSUANCE_POLICIES = "tokenIssuancePolicies"
TOKEN_LIFETIME_POLICIES = "tokenLifetimePolicies"
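# The generated $select/$expand enums above rely on `_CaseInsensitiveEnumMeta`
# so that values parse regardless of casing. A runnable sketch of that idea,
# with the metaclass written from scratch (names here are illustrative, not
# the SDK's exact implementation):

```python
from enum import Enum, EnumMeta


class CaseInsensitiveEnumMeta(EnumMeta):
    """Sketch of a metaclass allowing case-insensitive enum lookup."""

    def __getitem__(cls, name):
        # Member names are upper-case by convention, so normalize the key.
        return super().__getitem__(name.upper())

    def __call__(cls, value, *args, **kwargs):
        try:
            return super().__call__(value, *args, **kwargs)
        except ValueError:
            # Fall back to a case-insensitive scan over member values.
            for member in cls:
                if member.value.lower() == str(value).lower():
                    return member
            raise


class SelectItem(str, Enum, metaclass=CaseInsensitiveEnumMeta):
    ID = "id"
    DISPLAY_NAME = "displayName"
```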
| 35.423529 | 94 | 0.741205 | 3,815 | 39,143 | 7.232241 | 0.089646 | 0.034214 | 0.053278 | 0.110253 | 0.87833 | 0.87061 | 0.859193 | 0.849262 | 0.849262 | 0.832844 | 0 | 0.005292 | 0.174463 | 39,143 | 1,104 | 95 | 35.455616 | 0.848549 | 0.018394 | 0 | 0.863588 | 0 | 0 | 0.30073 | 0.079619 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002148 | false | 0.008593 | 0.002148 | 0.001074 | 0.996778 | 0.004296 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
960203bfef12b074256f166ade169d6dc937f180 | 2,486 | py | Python | src/brgy175/user_account/decorators.py | arjanlangit/brgy175 | 32f91a03c0d8d30472ad8b13e5ce0e47d229e4f8 | [
"bzip2-1.0.6"
] | 2 | 2019-11-08T14:37:16.000Z | 2019-11-08T14:37:19.000Z | src/brgy175/user_account/decorators.py | arjanlangit/brgy175 | 32f91a03c0d8d30472ad8b13e5ce0e47d229e4f8 | [
"bzip2-1.0.6"
] | null | null | null | src/brgy175/user_account/decorators.py | arjanlangit/brgy175 | 32f91a03c0d8d30472ad8b13e5ce0e47d229e4f8 | [
"bzip2-1.0.6"
] | 2 | 2019-10-08T05:40:39.000Z | 2020-03-02T20:28:22.000Z | from functools import wraps
from django.contrib.auth.mixins import AccessMixin
class superadmin_katarungan_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
if not profile.sector == 'Katarungan' and not profile.sector == 'superadmin' :
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs)
class superadmin_bpso_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
if not profile.sector == 'bpso' and not profile.sector == 'superadmin' :
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs)
class superadmin_vawc_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
if not profile.sector == 'vawc' and not profile.sector == 'superadmin' :
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs)
class superadmin_senior_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
        if not profile.sector == 'senior' and not profile.sector == 'superadmin':
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs)
class superadmin_resident_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
if not profile.sector == 'resident' and not profile.sector == 'superadmin' :
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs)
class superadmin_badac_only(AccessMixin):
def dispatch(self, request, *args, **kwargs):
profile = request.user
if not profile.sector == 'badac' and not profile.sector == 'superadmin' :
return self.handle_no_permission()
return super().dispatch(request, *args, **kwargs) | 43.614035 | 88 | 0.65527 | 274 | 2,486 | 5.843066 | 0.142336 | 0.09619 | 0.148657 | 0.113679 | 0.838226 | 0.838226 | 0.838226 | 0.838226 | 0.838226 | 0.838226 | 0 | 0 | 0.234111 | 2,486 | 57 | 89 | 43.614035 | 0.840861 | 0 | 0 | 0.702128 | 0 | 0 | 0.043828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148936 | false | 0 | 0.106383 | 0 | 0.702128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
9615db31e2da9306acab1d86978db8008f88f9b7 | 2,532 | py | Python | app/views/forms/signup.py | marcusosso/uwhvz | c20303c117e8b2fcd04f5901326054296d3f3caf | [
"MIT"
] | 9 | 2018-09-08T06:59:02.000Z | 2022-03-23T08:12:02.000Z | app/views/forms/signup.py | marcusosso/uwhvz | c20303c117e8b2fcd04f5901326054296d3f3caf | [
"MIT"
] | 24 | 2018-07-14T15:49:48.000Z | 2020-07-21T12:53:55.000Z | app/views/forms/signup.py | marcusosso/uwhvz | c20303c117e8b2fcd04f5901326054296d3f3caf | [
"MIT"
] | 6 | 2019-03-07T02:55:27.000Z | 2019-11-10T23:26:44.000Z | from django import forms
class UserSignupForm(forms.Form):
first_name = forms.CharField(
label="First name",
widget=forms.TextInput(
attrs={
'class': 'ui-input'
}
)
)
last_name = forms.CharField(
label="Last name",
widget=forms.TextInput(
attrs={
'class': 'ui-input'
}
)
)
password1 = forms.CharField(
label="Enter Password",
widget=forms.PasswordInput(
attrs={
'class': 'ui-input',
}
)
)
password2 = forms.CharField(
label="Confirm Password",
widget=forms.PasswordInput(
attrs={
'class': 'ui-input',
}
)
)
def clean_password2(self):
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError(
"The passwords entered do not match.",
code='passwords_do_not_match'
)
return password2
class UnrestrictedUserSignupForm(forms.Form):
email = forms.EmailField(
label="Email",
widget=forms.EmailInput(
attrs={
'class': 'ui-input'
}
)
)
first_name = forms.CharField(
label="First name",
widget=forms.TextInput(
attrs={
'class': 'ui-input'
}
)
)
last_name = forms.CharField(
label="Last name",
widget=forms.TextInput(
attrs={
'class': 'ui-input'
}
)
)
password1 = forms.CharField(
label="Enter Password",
widget=forms.PasswordInput(
attrs={
'class': 'ui-input',
}
)
)
password2 = forms.CharField(
label="Confirm Password",
widget=forms.PasswordInput(
attrs={
'class': 'ui-input',
}
)
)
def clean_password2(self):
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError(
"The passwords entered do not match.",
code='passwords_do_not_match'
)
return password2
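# Both forms above duplicate the same clean_password2 method field-for-field.
# A hypothetical refactor sketch: hoist the check into one mixin either form
# could inherit. `PasswordMatchMixin` and the plain-dict stand-in for
# cleaned_data are illustrative assumptions, not Django API; the real code
# would raise django.forms.ValidationError.

```python
class ValidationError(Exception):
    """Stand-in for django.forms.ValidationError in this sketch."""


class PasswordMatchMixin:
    def clean_password2(self):
        password1 = self.cleaned_data.get("password1")
        password2 = self.cleaned_data.get("password2")
        if password1 and password2 and password1 != password2:
            raise ValidationError("The passwords entered do not match.")
        return password2


class SignupForm(PasswordMatchMixin):
    def __init__(self, cleaned_data):
        # Django populates cleaned_data during full_clean(); a dict
        # stands in for it here.
        self.cleaned_data = cleaned_data
```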
| 23.663551 | 62 | 0.495656 | 210 | 2,532 | 5.9 | 0.214286 | 0.079903 | 0.087167 | 0.123487 | 0.873285 | 0.873285 | 0.873285 | 0.873285 | 0.873285 | 0.873285 | 0 | 0.015831 | 0.401264 | 2,532 | 106 | 63 | 23.886792 | 0.801451 | 0 | 0 | 0.623656 | 0 | 0 | 0.14613 | 0.017378 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021505 | false | 0.27957 | 0.010753 | 0 | 0.172043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
8269d229a16e3c078ee10cbae7705909ef4a0949 | 9,654 | py | Python | tests/unit/game/test_game_service.py | hmajid2301/banter-bus-management-api | d51a40c2d5254d4197cbe5bb84aa576df2c24893 | [
"Apache-2.0"
] | null | null | null | tests/unit/game/test_game_service.py | hmajid2301/banter-bus-management-api | d51a40c2d5254d4197cbe5bb84aa576df2c24893 | [
"Apache-2.0"
] | null | null | null | tests/unit/game/test_game_service.py | hmajid2301/banter-bus-management-api | d51a40c2d5254d4197cbe5bb84aa576df2c24893 | [
"Apache-2.0"
] | null | null | null | from typing import List
import pytest
from pytest_mock import MockFixture
from app.game.game_exceptions import (
GameExistsException,
GameNotFound,
InvalidGameFilter,
)
from app.game.game_models import Game
from app.game.game_service import GameService
from tests.unit.factories import GameFactory
from tests.unit.game.fake_game_repository import FakeGameRepository
from tests.unit.game.game_service_data import (
disable_game_data,
enable_game_data,
get_game_name_data,
update_enable_status_data,
)
@pytest.fixture(autouse=True)
def mock_beanie_document(mocker: MockFixture):
mocker.patch("beanie.odm.documents.Document.get_settings")
@pytest.mark.asyncio
async def test_add_game():
game_repository = FakeGameRepository(games=[])
game_service = GameService(game_repository=game_repository)
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
game = await game_service.add(
game_name=game_name,
rules_url=rules_url,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
expected_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
assert game == expected_game
@pytest.mark.asyncio
async def test_add_game_that_exists():
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
existing_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
game_repository = FakeGameRepository(games=[existing_game])
game_service = GameService(game_repository=game_repository)
with pytest.raises(GameExistsException):
await game_service.add(
game_name=game_name,
rules_url=rules_url,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
@pytest.mark.asyncio
async def test_add_game_game_name_is_unique():
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
existing_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
game_repository = FakeGameRepository(games=[existing_game])
game_service = GameService(game_repository=game_repository)
game_name = "quibly2"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
game = await game_service.add(
game_name=game_name,
rules_url=rules_url,
description=description,
display_name=display_name,
minimum_players=4,
maximum_players=16,
)
expected_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=4,
maximum_players=16,
)
assert game == expected_game
@pytest.mark.asyncio
async def test_remove_game():
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
existing_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
game_repository = FakeGameRepository(games=[existing_game])
game_service = GameService(game_repository=game_repository)
await game_service.remove(game_name=game_name)
with pytest.raises(GameNotFound):
await game_repository.get(game_name=game_name)
@pytest.mark.asyncio
async def test_remove_game_does_not_exist():
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
existing_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
game_repository = FakeGameRepository(games=[existing_game])
game_service = GameService(game_repository=game_repository)
game_name = "quiblyv2"
with pytest.raises(GameNotFound):
await game_service.remove(game_name=game_name)
@pytest.mark.asyncio
async def test_remove_game_no_game_exists():
game_repository = FakeGameRepository(games=[])
game_service = GameService(game_repository=game_repository)
game_name = "quibly"
with pytest.raises(GameNotFound):
await game_service.remove(game_name=game_name)
@pytest.mark.asyncio
async def test_get_game():
game_name = "quibly"
rules_url = "http://example.com/rules"
description = "A really fun game"
display_name = "Quibly"
minimum_players = 4
maximum_players = 16
existing_game = Game(
name=game_name,
rules_url=rules_url,
enabled=True,
description=description,
display_name=display_name,
minimum_players=minimum_players,
maximum_players=maximum_players,
)
game_repository = FakeGameRepository(games=[existing_game])
game_service = GameService(game_repository=game_repository)
game = await game_service.get(game_name=game_name)
assert game == existing_game
@pytest.mark.asyncio
async def test_get_game_does_not_exist():
game_repository = FakeGameRepository(games=[])
game_service = GameService(game_repository=game_repository)
with pytest.raises(GameNotFound):
await game_service.get(game_name="quibly")
@pytest.mark.asyncio
@pytest.mark.parametrize(
"factory_boy_args, enabled_filter, expected_result",
get_game_name_data,
ids=[
"get all games",
"get all enabled games",
"get all disabled games (none)",
"get all disabled games",
"get all games (mixed enabled state)",
"get all disabled games (single)",
"get all enabled games (single)",
],
)
async def test_get_game_names(factory_boy_args: dict, enabled_filter: str, expected_result: List[str]):
existing_games = GameFactory.build_batch(**factory_boy_args)
game_repository = FakeGameRepository(games=existing_games)
game_service = GameService(game_repository=game_repository)
games = await game_service.get_game_names(enabled_filter=enabled_filter)
assert games == expected_result
@pytest.mark.asyncio
async def test_get_game_names_invalid_filter():
existing_games = GameFactory.build_batch(3)
game_repository = FakeGameRepository(games=existing_games)
game_service = GameService(game_repository=game_repository)
with pytest.raises(InvalidGameFilter):
await game_service.get_game_names(enabled_filter="invalid")
@pytest.mark.asyncio
@pytest.mark.parametrize(
"factory_boy_args, game_name",
enable_game_data,
ids=[
"enable disabled game (all games disabled)",
"enable enabled game (all games enabled)",
"enable disabled game",
"enable enabled game",
],
)
async def test_enable_game(factory_boy_args: dict, game_name: str):
existing_games = GameFactory.build_batch(**factory_boy_args)
game_repository = FakeGameRepository(games=existing_games)
game_service = GameService(game_repository=game_repository)
game = await game_service.update_enabled_status(game_name=game_name, enabled=True)
assert game.enabled is True
@pytest.mark.asyncio
@pytest.mark.parametrize(
"factory_boy_args, game_name",
disable_game_data,
ids=[
"disable enabled game (all games enabled)",
"disable enabled game (all games disabled)",
"disable enabled game",
"disable disabled game",
],
)
async def test_disable_game(factory_boy_args: dict, game_name: str):
existing_games = GameFactory.build_batch(**factory_boy_args)
game_repository = FakeGameRepository(games=existing_games)
game_service = GameService(game_repository=game_repository)
game = await game_service.update_enabled_status(game_name=game_name, enabled=False)
assert game.enabled is False
@pytest.mark.asyncio
@pytest.mark.parametrize(
"enabled_status",
update_enable_status_data,
ids=[
"enable game",
"disable game",
],
)
async def test_enable_status_game_does_not_exist(enabled_status):
existing_games = GameFactory.build_batch(3)
game_repository = FakeGameRepository(games=existing_games)
game_service = GameService(game_repository=game_repository)
with pytest.raises(GameNotFound):
await game_service.update_enabled_status(game_name="quibly_v3", enabled=enabled_status)
| 29.888545 | 103 | 0.712451 | 1,143 | 9,654 | 5.702537 | 0.090989 | 0.063823 | 0.055232 | 0.041731 | 0.801473 | 0.776312 | 0.756981 | 0.756367 | 0.720313 | 0.700675 | 0 | 0.003775 | 0.204268 | 9,654 | 322 | 104 | 29.981366 | 0.844702 | 0 | 0 | 0.705882 | 0 | 0 | 0.104827 | 0.004351 | 0 | 0 | 0 | 0 | 0.022059 | 1 | 0.003676 | false | 0 | 0.033088 | 0 | 0.036765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
82a3e6e5e39f1d9e59edf60462c85c470ef55ab1 | 61,278 | py | Python | sdk/python/pulumi_aws/dms/replication_instance.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/dms/replication_instance.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/dms/replication_instance.py | chivandikwa/pulumi-aws | 19c08bf9dcb90544450ffa4eec7bf6751058fde2 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['ReplicationInstanceArgs', 'ReplicationInstance']
@pulumi.input_type
class ReplicationInstanceArgs:
def __init__(__self__, *,
replication_instance_class: pulumi.Input[str],
replication_instance_id: pulumi.Input[str],
allocated_storage: Optional[pulumi.Input[int]] = None,
allow_major_version_upgrade: Optional[pulumi.Input[bool]] = None,
apply_immediately: Optional[pulumi.Input[bool]] = None,
auto_minor_version_upgrade: Optional[pulumi.Input[bool]] = None,
availability_zone: Optional[pulumi.Input[str]] = None,
engine_version: Optional[pulumi.Input[str]] = None,
kms_key_arn: Optional[pulumi.Input[str]] = None,
multi_az: Optional[pulumi.Input[bool]] = None,
preferred_maintenance_window: Optional[pulumi.Input[str]] = None,
publicly_accessible: Optional[pulumi.Input[bool]] = None,
replication_subnet_group_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a ReplicationInstance resource.
:param pulumi.Input[str] replication_instance_class: The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
:param pulumi.Input[str] replication_instance_id: The replication instance identifier. This parameter is stored as a lowercase string.
:param pulumi.Input[int] allocated_storage: The amount of storage (in gigabytes) to be initially allocated for the replication instance.
:param pulumi.Input[bool] allow_major_version_upgrade: Indicates that major version upgrades are allowed.
:param pulumi.Input[bool] apply_immediately: Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
:param pulumi.Input[bool] auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
:param pulumi.Input[str] availability_zone: The EC2 Availability Zone that the replication instance will be created in.
:param pulumi.Input[str] engine_version: The engine version number of the replication instance.
:param pulumi.Input[str] kms_key_arn: The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
:param pulumi.Input[bool] multi_az: Specifies if the replication instance is a multi-az deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
:param pulumi.Input[str] preferred_maintenance_window: The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
:param pulumi.Input[bool] publicly_accessible: Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
:param pulumi.Input[str] replication_subnet_group_id: A subnet group to associate with the replication instance.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vpc_security_group_ids: A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
pulumi.set(__self__, "replication_instance_class", replication_instance_class)
pulumi.set(__self__, "replication_instance_id", replication_instance_id)
if allocated_storage is not None:
pulumi.set(__self__, "allocated_storage", allocated_storage)
if allow_major_version_upgrade is not None:
pulumi.set(__self__, "allow_major_version_upgrade", allow_major_version_upgrade)
if apply_immediately is not None:
pulumi.set(__self__, "apply_immediately", apply_immediately)
if auto_minor_version_upgrade is not None:
pulumi.set(__self__, "auto_minor_version_upgrade", auto_minor_version_upgrade)
if availability_zone is not None:
pulumi.set(__self__, "availability_zone", availability_zone)
if engine_version is not None:
pulumi.set(__self__, "engine_version", engine_version)
if kms_key_arn is not None:
pulumi.set(__self__, "kms_key_arn", kms_key_arn)
if multi_az is not None:
pulumi.set(__self__, "multi_az", multi_az)
if preferred_maintenance_window is not None:
pulumi.set(__self__, "preferred_maintenance_window", preferred_maintenance_window)
if publicly_accessible is not None:
pulumi.set(__self__, "publicly_accessible", publicly_accessible)
if replication_subnet_group_id is not None:
pulumi.set(__self__, "replication_subnet_group_id", replication_subnet_group_id)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if vpc_security_group_ids is not None:
pulumi.set(__self__, "vpc_security_group_ids", vpc_security_group_ids)
@property
@pulumi.getter(name="replicationInstanceClass")
def replication_instance_class(self) -> pulumi.Input[str]:
"""
The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
"""
return pulumi.get(self, "replication_instance_class")
@replication_instance_class.setter
def replication_instance_class(self, value: pulumi.Input[str]):
pulumi.set(self, "replication_instance_class", value)
@property
@pulumi.getter(name="replicationInstanceId")
def replication_instance_id(self) -> pulumi.Input[str]:
"""
The replication instance identifier. This parameter is stored as a lowercase string.
"""
return pulumi.get(self, "replication_instance_id")
@replication_instance_id.setter
def replication_instance_id(self, value: pulumi.Input[str]):
pulumi.set(self, "replication_instance_id", value)
@property
@pulumi.getter(name="allocatedStorage")
def allocated_storage(self) -> Optional[pulumi.Input[int]]:
"""
The amount of storage (in gigabytes) to be initially allocated for the replication instance.
"""
return pulumi.get(self, "allocated_storage")
@allocated_storage.setter
def allocated_storage(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "allocated_storage", value)
@property
@pulumi.getter(name="allowMajorVersionUpgrade")
def allow_major_version_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates that major version upgrades are allowed.
"""
return pulumi.get(self, "allow_major_version_upgrade")
@allow_major_version_upgrade.setter
def allow_major_version_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "allow_major_version_upgrade", value)
@property
@pulumi.getter(name="applyImmediately")
def apply_immediately(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
"""
return pulumi.get(self, "apply_immediately")
@apply_immediately.setter
def apply_immediately(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "apply_immediately", value)
@property
@pulumi.getter(name="autoMinorVersionUpgrade")
def auto_minor_version_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
"""
return pulumi.get(self, "auto_minor_version_upgrade")
@auto_minor_version_upgrade.setter
def auto_minor_version_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_minor_version_upgrade", value)
@property
@pulumi.getter(name="availabilityZone")
def availability_zone(self) -> Optional[pulumi.Input[str]]:
"""
The EC2 Availability Zone that the replication instance will be created in.
"""
return pulumi.get(self, "availability_zone")
@availability_zone.setter
def availability_zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "availability_zone", value)
@property
@pulumi.getter(name="engineVersion")
def engine_version(self) -> Optional[pulumi.Input[str]]:
"""
The engine version number of the replication instance.
"""
return pulumi.get(self, "engine_version")
@engine_version.setter
def engine_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "engine_version", value)
@property
@pulumi.getter(name="kmsKeyArn")
def kms_key_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
"""
return pulumi.get(self, "kms_key_arn")
@kms_key_arn.setter
def kms_key_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_arn", value)
@property
@pulumi.getter(name="multiAz")
def multi_az(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies if the replication instance is a multi-az deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
"""
return pulumi.get(self, "multi_az")
@multi_az.setter
def multi_az(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "multi_az", value)
@property
@pulumi.getter(name="preferredMaintenanceWindow")
def preferred_maintenance_window(self) -> Optional[pulumi.Input[str]]:
"""
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
"""
return pulumi.get(self, "preferred_maintenance_window")
@preferred_maintenance_window.setter
def preferred_maintenance_window(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "preferred_maintenance_window", value)
@property
@pulumi.getter(name="publiclyAccessible")
def publicly_accessible(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
"""
return pulumi.get(self, "publicly_accessible")
@publicly_accessible.setter
def publicly_accessible(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "publicly_accessible", value)
@property
@pulumi.getter(name="replicationSubnetGroupId")
def replication_subnet_group_id(self) -> Optional[pulumi.Input[str]]:
"""
A subnet group to associate with the replication instance.
"""
return pulumi.get(self, "replication_subnet_group_id")
@replication_subnet_group_id.setter
def replication_subnet_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replication_subnet_group_id", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
        A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="vpcSecurityGroupIds")
def vpc_security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
return pulumi.get(self, "vpc_security_group_ids")
@vpc_security_group_ids.setter
def vpc_security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "vpc_security_group_ids", value)
@pulumi.input_type
class _ReplicationInstanceState:
def __init__(__self__, *,
allocated_storage: Optional[pulumi.Input[int]] = None,
allow_major_version_upgrade: Optional[pulumi.Input[bool]] = None,
apply_immediately: Optional[pulumi.Input[bool]] = None,
auto_minor_version_upgrade: Optional[pulumi.Input[bool]] = None,
availability_zone: Optional[pulumi.Input[str]] = None,
engine_version: Optional[pulumi.Input[str]] = None,
kms_key_arn: Optional[pulumi.Input[str]] = None,
multi_az: Optional[pulumi.Input[bool]] = None,
preferred_maintenance_window: Optional[pulumi.Input[str]] = None,
publicly_accessible: Optional[pulumi.Input[bool]] = None,
replication_instance_arn: Optional[pulumi.Input[str]] = None,
replication_instance_class: Optional[pulumi.Input[str]] = None,
replication_instance_id: Optional[pulumi.Input[str]] = None,
replication_instance_private_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
replication_instance_public_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
replication_subnet_group_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
Input properties used for looking up and filtering ReplicationInstance resources.
:param pulumi.Input[int] allocated_storage: The amount of storage (in gigabytes) to be initially allocated for the replication instance.
:param pulumi.Input[bool] allow_major_version_upgrade: Indicates that major version upgrades are allowed.
:param pulumi.Input[bool] apply_immediately: Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
:param pulumi.Input[bool] auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
:param pulumi.Input[str] availability_zone: The EC2 Availability Zone that the replication instance will be created in.
:param pulumi.Input[str] engine_version: The engine version number of the replication instance.
:param pulumi.Input[str] kms_key_arn: The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
:param pulumi.Input[bool] multi_az: Specifies whether the replication instance is a Multi-AZ deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
:param pulumi.Input[str] preferred_maintenance_window: The weekly time range during which system maintenance can occur, in Coordinated Universal Time (UTC).
:param pulumi.Input[bool] publicly_accessible: Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
:param pulumi.Input[str] replication_instance_arn: The Amazon Resource Name (ARN) of the replication instance.
:param pulumi.Input[str] replication_instance_class: The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
:param pulumi.Input[str] replication_instance_id: The replication instance identifier. This parameter is stored as a lowercase string.
:param pulumi.Input[Sequence[pulumi.Input[str]]] replication_instance_private_ips: A list of the private IP addresses of the replication instance.
:param pulumi.Input[Sequence[pulumi.Input[str]]] replication_instance_public_ips: A list of the public IP addresses of the replication instance.
:param pulumi.Input[str] replication_subnet_group_id: A subnet group to associate with the replication instance.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If the provider is configured with a `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vpc_security_group_ids: A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
if allocated_storage is not None:
pulumi.set(__self__, "allocated_storage", allocated_storage)
if allow_major_version_upgrade is not None:
pulumi.set(__self__, "allow_major_version_upgrade", allow_major_version_upgrade)
if apply_immediately is not None:
pulumi.set(__self__, "apply_immediately", apply_immediately)
if auto_minor_version_upgrade is not None:
pulumi.set(__self__, "auto_minor_version_upgrade", auto_minor_version_upgrade)
if availability_zone is not None:
pulumi.set(__self__, "availability_zone", availability_zone)
if engine_version is not None:
pulumi.set(__self__, "engine_version", engine_version)
if kms_key_arn is not None:
pulumi.set(__self__, "kms_key_arn", kms_key_arn)
if multi_az is not None:
pulumi.set(__self__, "multi_az", multi_az)
if preferred_maintenance_window is not None:
pulumi.set(__self__, "preferred_maintenance_window", preferred_maintenance_window)
if publicly_accessible is not None:
pulumi.set(__self__, "publicly_accessible", publicly_accessible)
if replication_instance_arn is not None:
pulumi.set(__self__, "replication_instance_arn", replication_instance_arn)
if replication_instance_class is not None:
pulumi.set(__self__, "replication_instance_class", replication_instance_class)
if replication_instance_id is not None:
pulumi.set(__self__, "replication_instance_id", replication_instance_id)
if replication_instance_private_ips is not None:
pulumi.set(__self__, "replication_instance_private_ips", replication_instance_private_ips)
if replication_instance_public_ips is not None:
pulumi.set(__self__, "replication_instance_public_ips", replication_instance_public_ips)
if replication_subnet_group_id is not None:
pulumi.set(__self__, "replication_subnet_group_id", replication_subnet_group_id)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tags_all is not None:
pulumi.set(__self__, "tags_all", tags_all)
if vpc_security_group_ids is not None:
pulumi.set(__self__, "vpc_security_group_ids", vpc_security_group_ids)
@property
@pulumi.getter(name="allocatedStorage")
def allocated_storage(self) -> Optional[pulumi.Input[int]]:
"""
The amount of storage (in gigabytes) to be initially allocated for the replication instance.
"""
return pulumi.get(self, "allocated_storage")
@allocated_storage.setter
def allocated_storage(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "allocated_storage", value)
@property
@pulumi.getter(name="allowMajorVersionUpgrade")
def allow_major_version_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates that major version upgrades are allowed.
"""
return pulumi.get(self, "allow_major_version_upgrade")
@allow_major_version_upgrade.setter
def allow_major_version_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "allow_major_version_upgrade", value)
@property
@pulumi.getter(name="applyImmediately")
def apply_immediately(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
"""
return pulumi.get(self, "apply_immediately")
@apply_immediately.setter
def apply_immediately(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "apply_immediately", value)
@property
@pulumi.getter(name="autoMinorVersionUpgrade")
def auto_minor_version_upgrade(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
"""
return pulumi.get(self, "auto_minor_version_upgrade")
@auto_minor_version_upgrade.setter
def auto_minor_version_upgrade(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "auto_minor_version_upgrade", value)
@property
@pulumi.getter(name="availabilityZone")
def availability_zone(self) -> Optional[pulumi.Input[str]]:
"""
The EC2 Availability Zone that the replication instance will be created in.
"""
return pulumi.get(self, "availability_zone")
@availability_zone.setter
def availability_zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "availability_zone", value)
@property
@pulumi.getter(name="engineVersion")
def engine_version(self) -> Optional[pulumi.Input[str]]:
"""
The engine version number of the replication instance.
"""
return pulumi.get(self, "engine_version")
@engine_version.setter
def engine_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "engine_version", value)
@property
@pulumi.getter(name="kmsKeyArn")
def kms_key_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
"""
return pulumi.get(self, "kms_key_arn")
@kms_key_arn.setter
def kms_key_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_arn", value)
@property
@pulumi.getter(name="multiAz")
def multi_az(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies whether the replication instance is a Multi-AZ deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
"""
return pulumi.get(self, "multi_az")
@multi_az.setter
def multi_az(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "multi_az", value)
@property
@pulumi.getter(name="preferredMaintenanceWindow")
def preferred_maintenance_window(self) -> Optional[pulumi.Input[str]]:
"""
The weekly time range during which system maintenance can occur, in Coordinated Universal Time (UTC).
"""
return pulumi.get(self, "preferred_maintenance_window")
@preferred_maintenance_window.setter
def preferred_maintenance_window(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "preferred_maintenance_window", value)
@property
@pulumi.getter(name="publiclyAccessible")
def publicly_accessible(self) -> Optional[pulumi.Input[bool]]:
"""
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
"""
return pulumi.get(self, "publicly_accessible")
@publicly_accessible.setter
def publicly_accessible(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "publicly_accessible", value)
@property
@pulumi.getter(name="replicationInstanceArn")
def replication_instance_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) of the replication instance.
"""
return pulumi.get(self, "replication_instance_arn")
@replication_instance_arn.setter
def replication_instance_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replication_instance_arn", value)
@property
@pulumi.getter(name="replicationInstanceClass")
def replication_instance_class(self) -> Optional[pulumi.Input[str]]:
"""
The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
"""
return pulumi.get(self, "replication_instance_class")
@replication_instance_class.setter
def replication_instance_class(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replication_instance_class", value)
@property
@pulumi.getter(name="replicationInstanceId")
def replication_instance_id(self) -> Optional[pulumi.Input[str]]:
"""
The replication instance identifier. This parameter is stored as a lowercase string.
"""
return pulumi.get(self, "replication_instance_id")
@replication_instance_id.setter
def replication_instance_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replication_instance_id", value)
@property
@pulumi.getter(name="replicationInstancePrivateIps")
def replication_instance_private_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of the private IP addresses of the replication instance.
"""
return pulumi.get(self, "replication_instance_private_ips")
@replication_instance_private_ips.setter
def replication_instance_private_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "replication_instance_private_ips", value)
@property
@pulumi.getter(name="replicationInstancePublicIps")
def replication_instance_public_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of the public IP addresses of the replication instance.
"""
return pulumi.get(self, "replication_instance_public_ips")
@replication_instance_public_ips.setter
def replication_instance_public_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "replication_instance_public_ips", value)
@property
@pulumi.getter(name="replicationSubnetGroupId")
def replication_subnet_group_id(self) -> Optional[pulumi.Input[str]]:
"""
A subnet group to associate with the replication instance.
"""
return pulumi.get(self, "replication_subnet_group_id")
@replication_subnet_group_id.setter
def replication_subnet_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "replication_subnet_group_id", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags to assign to the resource. If the provider is configured with a `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@tags_all.setter
def tags_all(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags_all", value)
@property
@pulumi.getter(name="vpcSecurityGroupIds")
def vpc_security_group_ids(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
return pulumi.get(self, "vpc_security_group_ids")
@vpc_security_group_ids.setter
def vpc_security_group_ids(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "vpc_security_group_ids", value)
class ReplicationInstance(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
allocated_storage: Optional[pulumi.Input[int]] = None,
allow_major_version_upgrade: Optional[pulumi.Input[bool]] = None,
apply_immediately: Optional[pulumi.Input[bool]] = None,
auto_minor_version_upgrade: Optional[pulumi.Input[bool]] = None,
availability_zone: Optional[pulumi.Input[str]] = None,
engine_version: Optional[pulumi.Input[str]] = None,
kms_key_arn: Optional[pulumi.Input[str]] = None,
multi_az: Optional[pulumi.Input[bool]] = None,
preferred_maintenance_window: Optional[pulumi.Input[str]] = None,
publicly_accessible: Optional[pulumi.Input[bool]] = None,
replication_instance_class: Optional[pulumi.Input[str]] = None,
replication_instance_id: Optional[pulumi.Input[str]] = None,
replication_subnet_group_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
Provides a DMS (Data Migration Service) replication instance resource. DMS replication instances can be created, updated, deleted, and imported.
## Example Usage
Create the required IAM roles and then create the DMS replication instance, setting `depends_on` to the required role policy attachments.
```python
import pulumi
import pulumi_aws as aws
dms_assume_role = aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
actions=["sts:AssumeRole"],
principals=[aws.iam.GetPolicyDocumentStatementPrincipalArgs(
identifiers=["dms.amazonaws.com"],
type="Service",
)],
)])
dms_access_for_endpoint = aws.iam.Role("dms-access-for-endpoint", assume_role_policy=dms_assume_role.json)
dms_access_for_endpoint__amazon_dms_redshift_s3_role = aws.iam.RolePolicyAttachment("dms-access-for-endpoint-AmazonDMSRedshiftS3Role",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role",
role=dms_access_for_endpoint.name)
dms_cloudwatch_logs_role = aws.iam.Role("dms-cloudwatch-logs-role", assume_role_policy=dms_assume_role.json)
dms_cloudwatch_logs_role__amazon_dms_cloud_watch_logs_role = aws.iam.RolePolicyAttachment("dms-cloudwatch-logs-role-AmazonDMSCloudWatchLogsRole",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole",
role=dms_cloudwatch_logs_role.name)
dms_vpc_role = aws.iam.Role("dms-vpc-role", assume_role_policy=dms_assume_role.json)
dms_vpc_role__amazon_dmsvpc_management_role = aws.iam.RolePolicyAttachment("dms-vpc-role-AmazonDMSVPCManagementRole",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole",
role=dms_vpc_role.name)
# Create a new replication instance
test = aws.dms.ReplicationInstance("test",
allocated_storage=20,
apply_immediately=True,
auto_minor_version_upgrade=True,
availability_zone="us-west-2c",
engine_version="3.1.4",
kms_key_arn="arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012",
multi_az=False,
preferred_maintenance_window="sun:10:30-sun:14:30",
publicly_accessible=True,
replication_instance_class="dms.t2.micro",
replication_instance_id="test-dms-replication-instance-tf",
replication_subnet_group_id=aws_dms_replication_subnet_group["test-dms-replication-subnet-group-tf"]["id"],
tags={
"Name": "test",
},
vpc_security_group_ids=["sg-12345678"],
opts=pulumi.ResourceOptions(depends_on=[
dms_access_for_endpoint__amazon_dms_redshift_s3_role,
dms_cloudwatch_logs_role__amazon_dms_cloud_watch_logs_role,
dms_vpc_role__amazon_dmsvpc_management_role,
]))
```
## Import
Replication instances can be imported using the `replication_instance_id`, e.g.,
```sh
$ pulumi import aws:dms/replicationInstance:ReplicationInstance test test-dms-replication-instance-tf
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[int] allocated_storage: The amount of storage (in gigabytes) to be initially allocated for the replication instance.
:param pulumi.Input[bool] allow_major_version_upgrade: Indicates that major version upgrades are allowed.
:param pulumi.Input[bool] apply_immediately: Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
:param pulumi.Input[bool] auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
:param pulumi.Input[str] availability_zone: The EC2 Availability Zone that the replication instance will be created in.
:param pulumi.Input[str] engine_version: The engine version number of the replication instance.
:param pulumi.Input[str] kms_key_arn: The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
:param pulumi.Input[bool] multi_az: Specifies whether the replication instance is a Multi-AZ deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
:param pulumi.Input[str] preferred_maintenance_window: The weekly time range during which system maintenance can occur, in Coordinated Universal Time (UTC).
:param pulumi.Input[bool] publicly_accessible: Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
:param pulumi.Input[str] replication_instance_class: The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
:param pulumi.Input[str] replication_instance_id: The replication instance identifier. This parameter is stored as a lowercase string.
:param pulumi.Input[str] replication_subnet_group_id: A subnet group to associate with the replication instance.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If the provider is configured with a `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vpc_security_group_ids: A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ReplicationInstanceArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a DMS (Data Migration Service) replication instance resource. DMS replication instances can be created, updated, deleted, and imported.
## Example Usage
Create the required IAM roles and then create the DMS replication instance, setting `depends_on` to the required role policy attachments.
```python
import pulumi
import pulumi_aws as aws
dms_assume_role = aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
actions=["sts:AssumeRole"],
principals=[aws.iam.GetPolicyDocumentStatementPrincipalArgs(
identifiers=["dms.amazonaws.com"],
type="Service",
)],
)])
dms_access_for_endpoint = aws.iam.Role("dms-access-for-endpoint", assume_role_policy=dms_assume_role.json)
dms_access_for_endpoint__amazon_dms_redshift_s3_role = aws.iam.RolePolicyAttachment("dms-access-for-endpoint-AmazonDMSRedshiftS3Role",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role",
role=dms_access_for_endpoint.name)
dms_cloudwatch_logs_role = aws.iam.Role("dms-cloudwatch-logs-role", assume_role_policy=dms_assume_role.json)
dms_cloudwatch_logs_role__amazon_dms_cloud_watch_logs_role = aws.iam.RolePolicyAttachment("dms-cloudwatch-logs-role-AmazonDMSCloudWatchLogsRole",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole",
role=dms_cloudwatch_logs_role.name)
dms_vpc_role = aws.iam.Role("dms-vpc-role", assume_role_policy=dms_assume_role.json)
dms_vpc_role__amazon_dmsvpc_management_role = aws.iam.RolePolicyAttachment("dms-vpc-role-AmazonDMSVPCManagementRole",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole",
role=dms_vpc_role.name)
# Create a new replication instance
test = aws.dms.ReplicationInstance("test",
allocated_storage=20,
apply_immediately=True,
auto_minor_version_upgrade=True,
availability_zone="us-west-2c",
engine_version="3.1.4",
kms_key_arn="arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012",
multi_az=False,
preferred_maintenance_window="sun:10:30-sun:14:30",
publicly_accessible=True,
replication_instance_class="dms.t2.micro",
replication_instance_id="test-dms-replication-instance-tf",
replication_subnet_group_id=aws_dms_replication_subnet_group["test-dms-replication-subnet-group-tf"]["id"],
tags={
"Name": "test",
},
vpc_security_group_ids=["sg-12345678"],
opts=pulumi.ResourceOptions(depends_on=[
dms_access_for_endpoint__amazon_dms_redshift_s3_role,
dms_cloudwatch_logs_role__amazon_dms_cloud_watch_logs_role,
dms_vpc_role__amazon_dmsvpc_management_role,
]))
```
## Import
Replication instances can be imported using the `replication_instance_id`, e.g.,
```sh
$ pulumi import aws:dms/replicationInstance:ReplicationInstance test test-dms-replication-instance-tf
```
:param str resource_name: The name of the resource.
:param ReplicationInstanceArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ReplicationInstanceArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
allocated_storage: Optional[pulumi.Input[int]] = None,
allow_major_version_upgrade: Optional[pulumi.Input[bool]] = None,
apply_immediately: Optional[pulumi.Input[bool]] = None,
auto_minor_version_upgrade: Optional[pulumi.Input[bool]] = None,
availability_zone: Optional[pulumi.Input[str]] = None,
engine_version: Optional[pulumi.Input[str]] = None,
kms_key_arn: Optional[pulumi.Input[str]] = None,
multi_az: Optional[pulumi.Input[bool]] = None,
preferred_maintenance_window: Optional[pulumi.Input[str]] = None,
publicly_accessible: Optional[pulumi.Input[bool]] = None,
replication_instance_class: Optional[pulumi.Input[str]] = None,
replication_instance_id: Optional[pulumi.Input[str]] = None,
replication_subnet_group_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ReplicationInstanceArgs.__new__(ReplicationInstanceArgs)
__props__.__dict__["allocated_storage"] = allocated_storage
__props__.__dict__["allow_major_version_upgrade"] = allow_major_version_upgrade
__props__.__dict__["apply_immediately"] = apply_immediately
__props__.__dict__["auto_minor_version_upgrade"] = auto_minor_version_upgrade
__props__.__dict__["availability_zone"] = availability_zone
__props__.__dict__["engine_version"] = engine_version
__props__.__dict__["kms_key_arn"] = kms_key_arn
__props__.__dict__["multi_az"] = multi_az
__props__.__dict__["preferred_maintenance_window"] = preferred_maintenance_window
__props__.__dict__["publicly_accessible"] = publicly_accessible
if replication_instance_class is None and not opts.urn:
raise TypeError("Missing required property 'replication_instance_class'")
__props__.__dict__["replication_instance_class"] = replication_instance_class
if replication_instance_id is None and not opts.urn:
raise TypeError("Missing required property 'replication_instance_id'")
__props__.__dict__["replication_instance_id"] = replication_instance_id
__props__.__dict__["replication_subnet_group_id"] = replication_subnet_group_id
__props__.__dict__["tags"] = tags
__props__.__dict__["vpc_security_group_ids"] = vpc_security_group_ids
__props__.__dict__["replication_instance_arn"] = None
__props__.__dict__["replication_instance_private_ips"] = None
__props__.__dict__["replication_instance_public_ips"] = None
__props__.__dict__["tags_all"] = None
super(ReplicationInstance, __self__).__init__(
'aws:dms/replicationInstance:ReplicationInstance',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
allocated_storage: Optional[pulumi.Input[int]] = None,
allow_major_version_upgrade: Optional[pulumi.Input[bool]] = None,
apply_immediately: Optional[pulumi.Input[bool]] = None,
auto_minor_version_upgrade: Optional[pulumi.Input[bool]] = None,
availability_zone: Optional[pulumi.Input[str]] = None,
engine_version: Optional[pulumi.Input[str]] = None,
kms_key_arn: Optional[pulumi.Input[str]] = None,
multi_az: Optional[pulumi.Input[bool]] = None,
preferred_maintenance_window: Optional[pulumi.Input[str]] = None,
publicly_accessible: Optional[pulumi.Input[bool]] = None,
replication_instance_arn: Optional[pulumi.Input[str]] = None,
replication_instance_class: Optional[pulumi.Input[str]] = None,
replication_instance_id: Optional[pulumi.Input[str]] = None,
replication_instance_private_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
replication_instance_public_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
replication_subnet_group_id: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
vpc_security_group_ids: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'ReplicationInstance':
"""
Get an existing ReplicationInstance resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
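## Example Usage
A minimal sketch of recovering an existing instance's state; the resource name and provider ID below are hypothetical.
```python
import pulumi
import pulumi_aws as aws

# Look up an existing replication instance by its provider ID (hypothetical).
existing = aws.dms.ReplicationInstance.get("existing",
    id="test-dms-replication-instance-tf")

# Recovered outputs, such as the ARN, can then be used or exported.
pulumi.export("replication_instance_arn", existing.replication_instance_arn)
```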
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[int] allocated_storage: The amount of storage (in gigabytes) to be initially allocated for the replication instance.
:param pulumi.Input[bool] allow_major_version_upgrade: Indicates that major version upgrades are allowed.
:param pulumi.Input[bool] apply_immediately: Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
:param pulumi.Input[bool] auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
:param pulumi.Input[str] availability_zone: The EC2 Availability Zone that the replication instance will be created in.
:param pulumi.Input[str] engine_version: The engine version number of the replication instance.
:param pulumi.Input[str] kms_key_arn: The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
:param pulumi.Input[bool] multi_az: Specifies whether the replication instance is a Multi-AZ deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
:param pulumi.Input[str] preferred_maintenance_window: The weekly time range during which system maintenance can occur, in Coordinated Universal Time (UTC).
:param pulumi.Input[bool] publicly_accessible: Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
:param pulumi.Input[str] replication_instance_arn: The Amazon Resource Name (ARN) of the replication instance.
:param pulumi.Input[str] replication_instance_class: The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
:param pulumi.Input[str] replication_instance_id: The replication instance identifier. This parameter is stored as a lowercase string.
:param pulumi.Input[Sequence[pulumi.Input[str]]] replication_instance_private_ips: A list of the private IP addresses of the replication instance.
:param pulumi.Input[Sequence[pulumi.Input[str]]] replication_instance_public_ips: A list of the public IP addresses of the replication instance.
:param pulumi.Input[str] replication_subnet_group_id: A subnet group to associate with the replication instance.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If the provider is configured with a `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vpc_security_group_ids: A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ReplicationInstanceState.__new__(_ReplicationInstanceState)
__props__.__dict__["allocated_storage"] = allocated_storage
__props__.__dict__["allow_major_version_upgrade"] = allow_major_version_upgrade
__props__.__dict__["apply_immediately"] = apply_immediately
__props__.__dict__["auto_minor_version_upgrade"] = auto_minor_version_upgrade
__props__.__dict__["availability_zone"] = availability_zone
__props__.__dict__["engine_version"] = engine_version
__props__.__dict__["kms_key_arn"] = kms_key_arn
__props__.__dict__["multi_az"] = multi_az
__props__.__dict__["preferred_maintenance_window"] = preferred_maintenance_window
__props__.__dict__["publicly_accessible"] = publicly_accessible
__props__.__dict__["replication_instance_arn"] = replication_instance_arn
__props__.__dict__["replication_instance_class"] = replication_instance_class
__props__.__dict__["replication_instance_id"] = replication_instance_id
__props__.__dict__["replication_instance_private_ips"] = replication_instance_private_ips
__props__.__dict__["replication_instance_public_ips"] = replication_instance_public_ips
__props__.__dict__["replication_subnet_group_id"] = replication_subnet_group_id
__props__.__dict__["tags"] = tags
__props__.__dict__["tags_all"] = tags_all
__props__.__dict__["vpc_security_group_ids"] = vpc_security_group_ids
return ReplicationInstance(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="allocatedStorage")
def allocated_storage(self) -> pulumi.Output[int]:
"""
The amount of storage (in gigabytes) to be initially allocated for the replication instance.
"""
return pulumi.get(self, "allocated_storage")
@property
@pulumi.getter(name="allowMajorVersionUpgrade")
def allow_major_version_upgrade(self) -> pulumi.Output[Optional[bool]]:
"""
Indicates that major version upgrades are allowed.
"""
return pulumi.get(self, "allow_major_version_upgrade")
@property
@pulumi.getter(name="applyImmediately")
def apply_immediately(self) -> pulumi.Output[Optional[bool]]:
"""
Indicates whether the changes should be applied immediately or during the next maintenance window. Only used when updating an existing resource.
"""
return pulumi.get(self, "apply_immediately")
@property
@pulumi.getter(name="autoMinorVersionUpgrade")
def auto_minor_version_upgrade(self) -> pulumi.Output[bool]:
"""
Indicates that minor engine upgrades will be applied automatically to the replication instance during the maintenance window.
"""
return pulumi.get(self, "auto_minor_version_upgrade")
@property
@pulumi.getter(name="availabilityZone")
def availability_zone(self) -> pulumi.Output[str]:
"""
The EC2 Availability Zone that the replication instance will be created in.
"""
return pulumi.get(self, "availability_zone")
@property
@pulumi.getter(name="engineVersion")
def engine_version(self) -> pulumi.Output[str]:
"""
The engine version number of the replication instance.
"""
return pulumi.get(self, "engine_version")
@property
@pulumi.getter(name="kmsKeyArn")
def kms_key_arn(self) -> pulumi.Output[str]:
"""
The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.
"""
return pulumi.get(self, "kms_key_arn")
@property
@pulumi.getter(name="multiAz")
def multi_az(self) -> pulumi.Output[bool]:
"""
Specifies if the replication instance is a multi-az deployment. You cannot set the `availability_zone` parameter if the `multi_az` parameter is set to `true`.
"""
return pulumi.get(self, "multi_az")
@property
@pulumi.getter(name="preferredMaintenanceWindow")
def preferred_maintenance_window(self) -> pulumi.Output[str]:
"""
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
"""
return pulumi.get(self, "preferred_maintenance_window")
@property
@pulumi.getter(name="publiclyAccessible")
def publicly_accessible(self) -> pulumi.Output[bool]:
"""
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address.
"""
return pulumi.get(self, "publicly_accessible")
@property
@pulumi.getter(name="replicationInstanceArn")
def replication_instance_arn(self) -> pulumi.Output[str]:
"""
The Amazon Resource Name (ARN) of the replication instance.
"""
return pulumi.get(self, "replication_instance_arn")
@property
@pulumi.getter(name="replicationInstanceClass")
def replication_instance_class(self) -> pulumi.Output[str]:
"""
The compute and memory capacity of the replication instance as specified by the replication instance class. See [AWS DMS User Guide](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) for available instance sizes and advice on which one to choose.
"""
return pulumi.get(self, "replication_instance_class")
@property
@pulumi.getter(name="replicationInstanceId")
def replication_instance_id(self) -> pulumi.Output[str]:
"""
The replication instance identifier. This parameter is stored as a lowercase string.
"""
return pulumi.get(self, "replication_instance_id")
@property
@pulumi.getter(name="replicationInstancePrivateIps")
def replication_instance_private_ips(self) -> pulumi.Output[Sequence[str]]:
"""
A list of the private IP addresses of the replication instance.
"""
return pulumi.get(self, "replication_instance_private_ips")
@property
@pulumi.getter(name="replicationInstancePublicIps")
def replication_instance_public_ips(self) -> pulumi.Output[Sequence[str]]:
"""
A list of the public IP addresses of the replication instance.
"""
return pulumi.get(self, "replication_instance_public_ips")
@property
@pulumi.getter(name="replicationSubnetGroupId")
def replication_subnet_group_id(self) -> pulumi.Output[str]:
"""
A subnet group to associate with the replication instance.
"""
return pulumi.get(self, "replication_subnet_group_id")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
        A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> pulumi.Output[Mapping[str, str]]:
"""
        A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@property
@pulumi.getter(name="vpcSecurityGroupIds")
def vpc_security_group_ids(self) -> pulumi.Output[Sequence[str]]:
"""
A list of VPC security group IDs to be used with the replication instance. The VPC security groups must work with the VPC containing the replication instance.
"""
return pulumi.get(self, "vpc_security_group_ids")
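# Example usage of the `get` classmethod defined above, for looking up the
# state of an existing replication instance by ID. The resource name and ID
# below are hypothetical, not taken from this module:
#
#   import pulumi
#   import pulumi_aws as aws
#
#   existing = aws.dms.ReplicationInstance.get(
#       "existing-replication-instance",
#       id="my-dms-replication-instance-id",
#   )
#   pulumi.export("arn", existing.replication_instance_arn)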
# --- ramda/times_test.py (Rafi993/pyramda, MIT) ---
from .times import times
from ramda.private.asserts import assert_iterables_equal
def times_nocurry_test():
    assert_iterables_equal(times(5), [1, 2, 3, 4])
# --- rlzoo/algorithms/dppo/default.py (tensorlayer/RLzoo, Apache-2.0) ---
from rlzoo.common.policy_networks import *
from rlzoo.common.value_networks import *
from rlzoo.common.utils import set_seed
"""
full list of algorithm parameters (alg_params)
-----------------------------------------------
net_list: a list of networks (value and policy) used in the algorithm, from common functions or customization
optimizers_list: a list of optimizers for all networks and differentiable variables
epsilon: clip parameter (for method 'clip')
kl_target: controls bounds of policy update and adaptive lambda (for method 'penalty')
lam: KL-regularization coefficient (for method 'penalty')
-----------------------------------------------
full list of learning parameters (learn_params)
-----------------------------------------------
train_episodes: total number of episodes for training
test_episodes: total number of episodes for testing
max_steps: maximum number of steps for one episode
save_interval: time steps for saving
gamma: reward discount factor
mode: train or test
batch_size: update batch size
a_update_steps: actor update iteration steps
c_update_steps: critic update iteration steps
n_worker: number of workers
-----------------------------------------------
"""
def atari(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
def classic_control(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
def box2d(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
def mujoco(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
def robotics(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
def dm_control(env, default_seed=True):
if default_seed:
assert isinstance(env, list)
seed = np.arange(len(env)).tolist() # a list of seeds for each env
set_seed(seed, env) # reproducible
# for multi-threading
if isinstance(env, list): # judge if multiple envs are passed in for parallel computing
num_env = len(env) # number of envs passed in
env = env[0] # take one of the env as they are all the same
else:
num_env = 1
alg_params = dict(method='clip', # method can be clip or penalty
epsilon=0.2, # for method 'clip'
kl_target=0.01, # for method 'penalty'
lam=0.5 # for method 'penalty'
)
if alg_params.get('net_list') is None:
num_hidden_layer = 2 # number of hidden layers for the networks
hidden_dim = 64 # dimension of hidden layers for the networks
with tf.name_scope('DPPO'):
with tf.name_scope('V_Net'):
v_net = ValueNetwork(env.observation_space, [hidden_dim] * num_hidden_layer)
with tf.name_scope('Policy'):
policy_net = StochasticPolicyNetwork(env.observation_space, env.action_space,
[hidden_dim] * num_hidden_layer)
net_list = v_net, policy_net
alg_params['net_list'] = net_list
if alg_params.get('optimizers_list') is None:
actor_lr = 1e-4
critic_lr = 2e-4
optimizers_list = [tf.optimizers.Adam(critic_lr), tf.optimizers.Adam(actor_lr)]
alg_params['optimizers_list'] = optimizers_list
learn_params = dict(train_episodes=1000,
test_episodes=100,
max_steps=200,
save_interval=50,
gamma=0.9,
a_update_steps=10,
c_update_steps=10,
n_workers=num_env,
batch_size=32)
return alg_params, learn_params
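# The six per-domain default functions above (atari, classic_control, box2d,
# mujoco, robotics, dm_control) all repeat the same multi-env bookkeeping:
# seeding a list of envs, counting workers, and picking a representative env.
# The shared logic could be factored into one helper; `_resolve_envs` below is
# a sketch of that pattern and not part of the original RLzoo API:


def _resolve_envs(env):
    """Return (representative_env, num_env) for a single env or a list of envs."""
    if isinstance(env, list):
        # multiple envs passed in for parallel computing; they are all the same,
        # so any one of them can serve as the representative
        return env[0], len(env)
    return env, 1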
# --- src/tests/featurization/expected/featurization_expected_grover.py (panpiort8/huggingmolecules-1, Apache-2.0) ---
from huggingmolecules.featurization.featurization_grover import GroverMoleculeEncoding, GroverBatchEncoding
from torch import LongTensor, FloatTensor
expected_encoded_smiles = [
GroverMoleculeEncoding(
f_atoms=[
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.12011, 0, 0, 0, 1, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.12011, 0, 0, 0, 1, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False]],
f_bonds=[
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.12011, 0, 0, 0, 1, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, True, False, False, False, False,
False, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, True, False, False, False, False,
False, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, False, True, False, False, False,
False, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, False, True, False, False, False,
False, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 0, 1, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, True, False, False, False, False,
False, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0.12011, 0, 0, 0, 1, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, True, False, False, False, False,
False, 1, 0, 0, 0, 0, 0, 0]],
a2b=[[1], [0, 3], [2, 5], [4]],
b2a=[0, 1, 1, 2, 2, 3],
b2revb=[1, 0, 3, 2, 5, 4],
n_atoms=4, n_bonds=6, y=None),
GroverMoleculeEncoding(
f_atoms=[
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 1, 0, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.15999000000000002, 1, 0, 0, 0, 0,
0, 0, 0, True, False, False, False, False, False, False, False, False, False]],
f_bonds=[
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.12011, 1, 0, 0, 0, 0, 0, 0, 0,
False, False, False, False, False, False, False, False, False, False, 0, False, True, False, False, False,
False, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.15999000000000002, 1, 0, 0, 0, 0,
0, 0, 0, True, False, False, False, False, False, False, False, False, False, 0, False, True, False, False,
False, False, 1, 0, 0, 0, 0, 0, 0]],
a2b=[[1], [0]],
b2a=[0, 1],
b2revb=[1, 0],
n_atoms=2, n_bonds=2, y=None)
]
expected_batch = GroverBatchEncoding(
f_atoms=FloatTensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1600, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]),
f_bonds=FloatTensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 0.0000, 0.0000,
0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1201, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1600, 1.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000]]),
a2b=LongTensor([[0, 0],
[2, 0],
[1, 4],
[3, 6],
[5, 0],
[8, 0],
[7, 0]]),
b2a=LongTensor([0, 1, 2, 2, 3, 3, 4, 5, 6]),
b2revb=LongTensor([0, 2, 1, 4, 3, 6, 5, 8, 7]),
a2a=LongTensor([[0, 0],
[2, 0],
[1, 3],
[2, 4],
[3, 0],
[6, 0],
[5, 0]]),
a_scope=LongTensor([[1, 4],
[5, 2]]),
b_scope=LongTensor([[1, 6],
[7, 2]]), y=None, batch_size=2)
| 91.965854 | 120 | 0.432849 | 7,478 | 37,706 | 2.180262 | 0.005884 | 0.740309 | 1.324828 | 1.40579 | 0.976509 | 0.975957 | 0.973503 | 0.973503 | 0.973503 | 0.973503 | 0 | 0.624138 | 0.36538 | 37,706 | 409 | 121 | 92.190709 | 0.057211 | 0 | 0 | 0.90172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.004914 | 0 | 0.004914 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
7ddd2a658150dff037f302d4a8b6fa1103cc7e34 | 137 | py | Python | spectral_similarity/__init__.py | hechth/Daphnis | 0699f720b11b6ec0d4b82f96f7d3f9f88a617578 | [
"Apache-2.0"
] | null | null | null | spectral_similarity/__init__.py | hechth/Daphnis | 0699f720b11b6ec0d4b82f96f7d3f9f88a617578 | [
"Apache-2.0"
] | 2 | 2021-03-08T13:21:49.000Z | 2021-03-11T10:19:59.000Z | spectral_similarity/__init__.py | hechth/Daphnis | 0699f720b11b6ec0d4b82f96f7d3f9f88a617578 | [
"Apache-2.0"
] | null | null | null | try:
from spectral_similarity.spectral_similarity import *
except ImportError:  # fall back to the Daphnis-qualified package path
from Daphnis.spectral_similarity.spectral_similarity import *
| 27.4 | 65 | 0.817518 | 15 | 137 | 7.2 | 0.466667 | 0.666667 | 0.481481 | 0.666667 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131387 | 137 | 4 | 66 | 34.25 | 0.907563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
7de4d5d751c870b3bbc7af3d58af320c72102bbb | 1,507 | py | Python | chemicalc/tests/test_reference_spectra.py | NathanSandford/Chem-I-Calc | 34ec9b9e6c23a7d55f64b20de3b17547e1471dfd | [
"MIT"
] | 4 | 2020-06-18T15:38:48.000Z | 2021-04-09T04:49:16.000Z | chemicalc/tests/test_reference_spectra.py | NathanSandford/Chem-I-Calc | 34ec9b9e6c23a7d55f64b20de3b17547e1471dfd | [
"MIT"
] | 19 | 2019-08-02T15:13:35.000Z | 2020-06-22T16:18:15.000Z | chemicalc/tests/test_reference_spectra.py | NathanSandford/Chem-I-Calc | 34ec9b9e6c23a7d55f64b20de3b17547e1471dfd | [
"MIT"
] | 4 | 2019-09-16T23:10:20.000Z | 2021-05-10T08:19:42.000Z | import pytest
import chemicalc.reference_spectra as ref
from chemicalc import utils as u
# ToDo: Clean up with @pytest.mark.parametrize()
def test_alpha_el():
assert isinstance(ref.alpha_el, list)
assert ref.alpha_el == u.alpha_el
for el in ref.alpha_el:
assert el in ref.elements_included
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_init():
# ToDo: Implement Tests
star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_addrv():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_convolve():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_calcgrad():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_zerograd():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_getnames():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
@pytest.mark.skip(reason="Test not implemented")
def test_RefSpec_reset():
# ToDo: Implement Tests
#star = ref.ReferenceSpectra(reference="RGB_m1.5")
pass
| 23.920635 | 54 | 0.71931 | 204 | 1,507 | 5.171569 | 0.230392 | 0.075829 | 0.092891 | 0.132701 | 0.743128 | 0.743128 | 0.743128 | 0.743128 | 0.743128 | 0.743128 | 0 | 0.01112 | 0.164565 | 1,507 | 62 | 55 | 24.306452 | 0.826847 | 0.327804 | 0 | 0.466667 | 0 | 0 | 0.148297 | 0 | 0 | 0 | 0 | 0.016129 | 0.1 | 1 | 0.266667 | false | 0.233333 | 0.1 | 0 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
81631448186b5db62e17f70008937908d0ddfb57 | 11,450 | py | Python | longestNonRepeating.py | bourneagain/pythonBytes | be115162147e52718aacbfb9cd2763aa02754f28 | [
"MIT"
] | 1 | 2017-05-29T02:02:27.000Z | 2017-05-29T02:02:27.000Z | longestNonRepeating.py | bourneagain/pythonBytes | be115162147e52718aacbfb9cd2763aa02754f28 | [
"MIT"
] | null | null | null | longestNonRepeating.py | bourneagain/pythonBytes | be115162147e52718aacbfb9cd2763aa02754f28 | [
"MIT"
] | null | null | null | class Solution:
# @return an integer
def lengthOfLongestSubstring(self, s):
"""
working
"""
result = 0
i = 0
ptr = 0
flag = True
while i < len(s):
if flag:
while ptr < len(s):
if s[ptr] not in s[i:ptr]:
ptr += 1
else:
result = max(result, ptr-i)
flag = False
break
if ptr == len(s):
break
if s[i] == s[ptr]:
flag = True
i += 1
return max(result, ptr-i)
    def lengthOfLongestSubstring2(self, s):
        # Sliding-window variant: j scans forward while m flags characters
        # currently in the window; on a repeat, i advances past the prior
        # occurrence, marking evicted characters False.
i=0
m={}
n=len(s)
j=0
max_len=0
while j < n:
            if s[j] in m and m[s[j]]:  # check the flag: evicted keys linger in m with value False
max_len=max(max_len, j-i)
while s[i] != s[j]:
m[s[i]] = False
i+=1
i+=1
j+=1
else:
m[s[j]] = True
j+=1
return max(max_len,n-i)
import time
x=Solution()
#a=r"""abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ abcdefghijklmnopqrstuvw"""
a = "wlrbbmqbhcdarzowkkyhiddqscdxrjmowfrxsjybldbefsarcbynecdyggxxpklorellnmpapqfwkhopkmco"

# Time both implementations; `x` is assumed to be an instance (defined above,
# not shown in this fragment) exposing lengthOfLongestSubstring and
# lengthOfLongestSubstring2. Requires `import time`.
st = time.time()
print(x.lengthOfLongestSubstring(a))
print((time.time() - st) * 1e6)  # elapsed time in microseconds

st = time.time()
print(x.lengthOfLongestSubstring2(a))
print((time.time() - st) * 1e6)  # elapsed time in microseconds
# cases/floorDivEqual.py — Python 2 test case for a Python 2.7 interpreter.
x = 32
y = 3.42
x //= y
print x
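The test case above exercises the floor-division assignment operator `//=` with a float operand under Python 2 syntax; the numeric semantics are identical in Python 3. A minimal sketch, assuming standard CPython float behavior:

```python
# Floor division with a float operand yields a float, rounded toward
# negative infinity: 32 / 3.42 ≈ 9.357, so 32 // 3.42 == 9.0.
x = 32
y = 3.42
x //= y
print(x)  # 9.0
```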
# tests/test_helpdesk_sample.py (chyroc/pylark)
# Code generated by lark_sdk_gen. DO NOT EDIT.
import unittest
import pylark
import pytest
from tests.test_conf import app_all_permission, app_no_permission
from tests.test_helper import mock_get_tenant_access_token_failed
def mock(*args, **kwargs):
raise pylark.PyLarkError(scope="scope", func="func", code=1, msg="mock-failed")
def mock_raw_request(*args, **kwargs):
raise pylark.PyLarkError(
scope="scope", func="func", code=1, msg="mock-raw-request-failed"
)
# mock get token
class TestHelpdeskSampleMockGetTokenFailed(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(TestHelpdeskSampleMockGetTokenFailed, self).__init__(*args, **kwargs)
self.cli = app_all_permission.ins()
self.cli.auth.get_tenant_access_token = mock_get_tenant_access_token_failed
self.cli.auth.get_app_access_token = mock_get_tenant_access_token_failed
self.module_cli = self.cli.helpdesk
def test_mock_get_token_start_helpdesk_service(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.start_helpdesk_service(pylark.StartHelpdeskServiceReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_ticket(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket(pylark.GetHelpdeskTicketReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_ticket_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_list(pylark.GetHelpdeskTicketListReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_download_helpdesk_ticket_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.download_helpdesk_ticket_image(
pylark.DownloadHelpdeskTicketImageReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_answer_helpdesk_ticket_user_query(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.answer_helpdesk_ticket_user_query(
pylark.AnswerHelpdeskTicketUserQueryReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_ticket_message_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_message_list(
pylark.GetHelpdeskTicketMessageListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_send_helpdesk_ticket_message(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.send_helpdesk_ticket_message(
pylark.SendHelpdeskTicketMessageReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_ticket_customized_field_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field_list(
pylark.GetHelpdeskTicketCustomizedFieldListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_ticket_customized_field(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field(
pylark.GetHelpdeskTicketCustomizedFieldReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_category(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category(pylark.GetHelpdeskCategoryReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_category_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category_list(
pylark.GetHelpdeskCategoryListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq(pylark.GetHelpdeskFAQReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_faq_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_list(pylark.GetHelpdeskFAQListReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_faq_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_image(pylark.GetHelpdeskFAQImageReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_search_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.search_helpdesk_faq(pylark.SearchHelpdeskFAQReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_email(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_email(pylark.GetHelpdeskAgentEmailReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_schedule(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule(
pylark.GetHelpdeskAgentScheduleReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_schedule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule_list(
pylark.GetHelpdeskAgentScheduleListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_skill(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill(pylark.GetHelpdeskAgentSkillReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_skill_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_list(
pylark.GetHelpdeskAgentSkillListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_get_helpdesk_agent_skill_rule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_rule_list(
pylark.GetHelpdeskAgentSkillRuleListReq()
)
assert "msg=failed" in f"{e}"
def test_mock_get_token_subscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.subscribe_helpdesk_event(pylark.SubscribeHelpdeskEventReq())
assert "msg=failed" in f"{e}"
def test_mock_get_token_unsubscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.unsubscribe_helpdesk_event(
pylark.UnsubscribeHelpdeskEventReq()
)
assert "msg=failed" in f"{e}"
# mock mock self func
class TestHelpdeskSampleMockSelfFuncFailed(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(TestHelpdeskSampleMockSelfFuncFailed, self).__init__(*args, **kwargs)
self.cli = app_all_permission.ins()
self.module_cli = self.cli.helpdesk
def test_mock_self_func_start_helpdesk_service(self):
origin_func = self.module_cli.start_helpdesk_service
self.module_cli.start_helpdesk_service = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.start_helpdesk_service(pylark.StartHelpdeskServiceReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.start_helpdesk_service = origin_func
def test_mock_self_func_get_helpdesk_ticket(self):
origin_func = self.module_cli.get_helpdesk_ticket
self.module_cli.get_helpdesk_ticket = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket(pylark.GetHelpdeskTicketReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_ticket = origin_func
def test_mock_self_func_get_helpdesk_ticket_list(self):
origin_func = self.module_cli.get_helpdesk_ticket_list
self.module_cli.get_helpdesk_ticket_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_list(pylark.GetHelpdeskTicketListReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_ticket_list = origin_func
def test_mock_self_func_download_helpdesk_ticket_image(self):
origin_func = self.module_cli.download_helpdesk_ticket_image
self.module_cli.download_helpdesk_ticket_image = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.download_helpdesk_ticket_image(
pylark.DownloadHelpdeskTicketImageReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.download_helpdesk_ticket_image = origin_func
def test_mock_self_func_answer_helpdesk_ticket_user_query(self):
origin_func = self.module_cli.answer_helpdesk_ticket_user_query
self.module_cli.answer_helpdesk_ticket_user_query = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.answer_helpdesk_ticket_user_query(
pylark.AnswerHelpdeskTicketUserQueryReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.answer_helpdesk_ticket_user_query = origin_func
def test_mock_self_func_get_helpdesk_ticket_message_list(self):
origin_func = self.module_cli.get_helpdesk_ticket_message_list
self.module_cli.get_helpdesk_ticket_message_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_message_list(
pylark.GetHelpdeskTicketMessageListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_ticket_message_list = origin_func
def test_mock_self_func_send_helpdesk_ticket_message(self):
origin_func = self.module_cli.send_helpdesk_ticket_message
self.module_cli.send_helpdesk_ticket_message = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.send_helpdesk_ticket_message(
pylark.SendHelpdeskTicketMessageReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.send_helpdesk_ticket_message = origin_func
def test_mock_self_func_get_helpdesk_ticket_customized_field_list(self):
origin_func = self.module_cli.get_helpdesk_ticket_customized_field_list
self.module_cli.get_helpdesk_ticket_customized_field_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field_list(
pylark.GetHelpdeskTicketCustomizedFieldListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_ticket_customized_field_list = origin_func
def test_mock_self_func_get_helpdesk_ticket_customized_field(self):
origin_func = self.module_cli.get_helpdesk_ticket_customized_field
self.module_cli.get_helpdesk_ticket_customized_field = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field(
pylark.GetHelpdeskTicketCustomizedFieldReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_ticket_customized_field = origin_func
def test_mock_self_func_get_helpdesk_category(self):
origin_func = self.module_cli.get_helpdesk_category
self.module_cli.get_helpdesk_category = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category(pylark.GetHelpdeskCategoryReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_category = origin_func
def test_mock_self_func_get_helpdesk_category_list(self):
origin_func = self.module_cli.get_helpdesk_category_list
self.module_cli.get_helpdesk_category_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category_list(
pylark.GetHelpdeskCategoryListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_category_list = origin_func
def test_mock_self_func_get_helpdesk_faq(self):
origin_func = self.module_cli.get_helpdesk_faq
self.module_cli.get_helpdesk_faq = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq(pylark.GetHelpdeskFAQReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_faq = origin_func
def test_mock_self_func_get_helpdesk_faq_list(self):
origin_func = self.module_cli.get_helpdesk_faq_list
self.module_cli.get_helpdesk_faq_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_list(pylark.GetHelpdeskFAQListReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_faq_list = origin_func
def test_mock_self_func_get_helpdesk_faq_image(self):
origin_func = self.module_cli.get_helpdesk_faq_image
self.module_cli.get_helpdesk_faq_image = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_image(pylark.GetHelpdeskFAQImageReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_faq_image = origin_func
def test_mock_self_func_search_helpdesk_faq(self):
origin_func = self.module_cli.search_helpdesk_faq
self.module_cli.search_helpdesk_faq = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.search_helpdesk_faq(pylark.SearchHelpdeskFAQReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.search_helpdesk_faq = origin_func
def test_mock_self_func_get_helpdesk_agent_email(self):
origin_func = self.module_cli.get_helpdesk_agent_email
self.module_cli.get_helpdesk_agent_email = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_email(pylark.GetHelpdeskAgentEmailReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_email = origin_func
def test_mock_self_func_get_helpdesk_agent_schedule(self):
origin_func = self.module_cli.get_helpdesk_agent_schedule
self.module_cli.get_helpdesk_agent_schedule = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule(
pylark.GetHelpdeskAgentScheduleReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_schedule = origin_func
def test_mock_self_func_get_helpdesk_agent_schedule_list(self):
origin_func = self.module_cli.get_helpdesk_agent_schedule_list
self.module_cli.get_helpdesk_agent_schedule_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule_list(
pylark.GetHelpdeskAgentScheduleListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_schedule_list = origin_func
def test_mock_self_func_get_helpdesk_agent_skill(self):
origin_func = self.module_cli.get_helpdesk_agent_skill
self.module_cli.get_helpdesk_agent_skill = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill(pylark.GetHelpdeskAgentSkillReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_skill = origin_func
def test_mock_self_func_get_helpdesk_agent_skill_list(self):
origin_func = self.module_cli.get_helpdesk_agent_skill_list
self.module_cli.get_helpdesk_agent_skill_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_list(
pylark.GetHelpdeskAgentSkillListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_skill_list = origin_func
def test_mock_self_func_get_helpdesk_agent_skill_rule_list(self):
origin_func = self.module_cli.get_helpdesk_agent_skill_rule_list
self.module_cli.get_helpdesk_agent_skill_rule_list = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_rule_list(
pylark.GetHelpdeskAgentSkillRuleListReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.get_helpdesk_agent_skill_rule_list = origin_func
def test_mock_self_func_subscribe_helpdesk_event(self):
origin_func = self.module_cli.subscribe_helpdesk_event
self.module_cli.subscribe_helpdesk_event = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.subscribe_helpdesk_event(pylark.SubscribeHelpdeskEventReq())
assert "msg=mock-failed" in f"{e}"
self.module_cli.subscribe_helpdesk_event = origin_func
def test_mock_self_func_unsubscribe_helpdesk_event(self):
origin_func = self.module_cli.unsubscribe_helpdesk_event
self.module_cli.unsubscribe_helpdesk_event = mock
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.unsubscribe_helpdesk_event(
pylark.UnsubscribeHelpdeskEventReq()
)
assert "msg=mock-failed" in f"{e}"
self.module_cli.unsubscribe_helpdesk_event = origin_func
# mock raw request
class TestHelpdeskSampleMockRawRequestFailed(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(TestHelpdeskSampleMockRawRequestFailed, self).__init__(*args, **kwargs)
self.cli = app_all_permission.ins()
self.module_cli = self.cli.helpdesk
self.cli.raw_request = mock_raw_request
def test_mock_raw_request_start_helpdesk_service(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.start_helpdesk_service(pylark.StartHelpdeskServiceReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_ticket(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket(
pylark.GetHelpdeskTicketReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_ticket_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_list(pylark.GetHelpdeskTicketListReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_download_helpdesk_ticket_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.download_helpdesk_ticket_image(
pylark.DownloadHelpdeskTicketImageReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_answer_helpdesk_ticket_user_query(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.answer_helpdesk_ticket_user_query(
pylark.AnswerHelpdeskTicketUserQueryReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_ticket_message_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_message_list(
pylark.GetHelpdeskTicketMessageListReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_send_helpdesk_ticket_message(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.send_helpdesk_ticket_message(
pylark.SendHelpdeskTicketMessageReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_ticket_customized_field_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field_list(
pylark.GetHelpdeskTicketCustomizedFieldListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_ticket_customized_field(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field(
pylark.GetHelpdeskTicketCustomizedFieldReq(
ticket_customized_field_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_category(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category(
pylark.GetHelpdeskCategoryReq(
id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_category_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category_list(
pylark.GetHelpdeskCategoryListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq(
pylark.GetHelpdeskFAQReq(
id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_faq_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_list(pylark.GetHelpdeskFAQListReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_faq_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_image(
pylark.GetHelpdeskFAQImageReq(
id="x",
image_key="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_search_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.search_helpdesk_faq(pylark.SearchHelpdeskFAQReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_email(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_email(pylark.GetHelpdeskAgentEmailReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_schedule(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule(
pylark.GetHelpdeskAgentScheduleReq(
agent_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_schedule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule_list(
pylark.GetHelpdeskAgentScheduleListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_skill(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill(
pylark.GetHelpdeskAgentSkillReq(
agent_skill_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_skill_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_list(
pylark.GetHelpdeskAgentSkillListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_get_helpdesk_agent_skill_rule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_rule_list(
pylark.GetHelpdeskAgentSkillRuleListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_subscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.subscribe_helpdesk_event(pylark.SubscribeHelpdeskEventReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
def test_mock_raw_request_unsubscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.unsubscribe_helpdesk_event(
pylark.UnsubscribeHelpdeskEventReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
assert "mock-raw-request-failed" in e.value.msg
# real request
class TestHelpdeskSampleRealRequestFailed(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(TestHelpdeskSampleRealRequestFailed, self).__init__(*args, **kwargs)
self.cli = app_no_permission.ins()
self.module_cli = self.cli.helpdesk
def test_real_request_start_helpdesk_service(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.start_helpdesk_service(pylark.StartHelpdeskServiceReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_ticket(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket(
pylark.GetHelpdeskTicketReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_ticket_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_list(pylark.GetHelpdeskTicketListReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_download_helpdesk_ticket_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.download_helpdesk_ticket_image(
pylark.DownloadHelpdeskTicketImageReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_answer_helpdesk_ticket_user_query(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.answer_helpdesk_ticket_user_query(
pylark.AnswerHelpdeskTicketUserQueryReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_ticket_message_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_message_list(
pylark.GetHelpdeskTicketMessageListReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_send_helpdesk_ticket_message(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.send_helpdesk_ticket_message(
pylark.SendHelpdeskTicketMessageReq(
ticket_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_ticket_customized_field_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field_list(
pylark.GetHelpdeskTicketCustomizedFieldListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_ticket_customized_field(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_ticket_customized_field(
pylark.GetHelpdeskTicketCustomizedFieldReq(
ticket_customized_field_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_category(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category(
pylark.GetHelpdeskCategoryReq(
id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_category_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_category_list(
pylark.GetHelpdeskCategoryListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq(
pylark.GetHelpdeskFAQReq(
id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_faq_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_list(pylark.GetHelpdeskFAQListReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_faq_image(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_faq_image(
pylark.GetHelpdeskFAQImageReq(
id="x",
image_key="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_search_helpdesk_faq(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.search_helpdesk_faq(pylark.SearchHelpdeskFAQReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_email(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_email(pylark.GetHelpdeskAgentEmailReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_schedule(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule(
pylark.GetHelpdeskAgentScheduleReq(
agent_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_schedule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_schedule_list(
pylark.GetHelpdeskAgentScheduleListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_skill(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill(
pylark.GetHelpdeskAgentSkillReq(
agent_skill_id="x",
)
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_skill_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_list(
pylark.GetHelpdeskAgentSkillListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_get_helpdesk_agent_skill_rule_list(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.get_helpdesk_agent_skill_rule_list(
pylark.GetHelpdeskAgentSkillRuleListReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_subscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.subscribe_helpdesk_event(pylark.SubscribeHelpdeskEventReq())
assert e.type is pylark.PyLarkError
assert e.value.code > 0
def test_real_request_unsubscribe_helpdesk_event(self):
with pytest.raises(pylark.PyLarkError) as e:
self.module_cli.unsubscribe_helpdesk_event(
pylark.UnsubscribeHelpdeskEventReq()
)
assert e.type is pylark.PyLarkError
assert e.value.code > 0
# sdk/python/pulumi_aws/ec2/spot_fleet_request.py (RafalSumislawski/pulumi-aws)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['SpotFleetRequestArgs', 'SpotFleetRequest']
@pulumi.input_type
class SpotFleetRequestArgs:
    def __init__(__self__, *,
                 iam_fleet_role: pulumi.Input[str],
                 target_capacity: pulumi.Input[int],
                 allocation_strategy: Optional[pulumi.Input[str]] = None,
                 excess_capacity_termination_policy: Optional[pulumi.Input[str]] = None,
                 fleet_type: Optional[pulumi.Input[str]] = None,
                 instance_interruption_behaviour: Optional[pulumi.Input[str]] = None,
                 instance_pools_to_use_count: Optional[pulumi.Input[int]] = None,
                 launch_specifications: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]] = None,
                 launch_template_configs: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]] = None,
                 load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 on_demand_allocation_strategy: Optional[pulumi.Input[str]] = None,
                 on_demand_max_total_price: Optional[pulumi.Input[str]] = None,
                 on_demand_target_capacity: Optional[pulumi.Input[int]] = None,
                 replace_unhealthy_instances: Optional[pulumi.Input[bool]] = None,
                 spot_maintenance_strategies: Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']] = None,
                 spot_price: Optional[pulumi.Input[str]] = None,
                 tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 target_group_arns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 terminate_instances_with_expiration: Optional[pulumi.Input[bool]] = None,
                 valid_from: Optional[pulumi.Input[str]] = None,
                 valid_until: Optional[pulumi.Input[str]] = None,
                 wait_for_fulfillment: Optional[pulumi.Input[bool]] = None):
        """
        The set of arguments for constructing a SpotFleetRequest resource.
        :param pulumi.Input[str] iam_fleet_role: Grants the Spot fleet permission to terminate
               Spot instances on your behalf when you cancel its Spot fleet request using
               CancelSpotFleetRequests or when the Spot fleet request expires, if you set
               terminateInstancesWithExpiration.
        :param pulumi.Input[int] target_capacity: The number of units to request. You can choose to set the
               target capacity in terms of instances or a performance characteristic that is
               important to your application workload, such as vCPUs, memory, or I/O.
        :param pulumi.Input[str] allocation_strategy: Indicates how to allocate the target capacity across
               the Spot pools specified by the Spot fleet request. The default is
               `lowestPrice`.
        :param pulumi.Input[str] excess_capacity_termination_policy: Indicates whether running Spot
               instances should be terminated if the target capacity of the Spot fleet
               request is decreased below the current size of the Spot fleet.
        :param pulumi.Input[str] fleet_type: The type of fleet request. Indicates whether the Spot Fleet only requests the target
               capacity or also attempts to maintain it. Default is `maintain`.
        :param pulumi.Input[str] instance_interruption_behaviour: Indicates whether a Spot
               instance stops or terminates when it is interrupted. Default is
               `terminate`.
        :param pulumi.Input[int] instance_pools_to_use_count: The number of Spot pools across which to allocate your target Spot capacity.
               Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
               the cheapest Spot pools and evenly allocates your target Spot capacity across
               the number of Spot pools that you specify.
        :param pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]] launch_specifications: Used to define the launch configuration of the
               spot-fleet request. Can be specified multiple times to define different bids
               across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
        :param pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]] launch_template_configs: Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] load_balancers: A list of elastic load balancer names to add to the Spot fleet.
        :param pulumi.Input[str] on_demand_allocation_strategy: The order of the launch template overrides to use in fulfilling On-Demand capacity. The possible values are `lowestPrice` and `prioritized`; the default is `lowestPrice`.
        :param pulumi.Input[str] on_demand_max_total_price: The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.
        :param pulumi.Input[int] on_demand_target_capacity: The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
        :param pulumi.Input[bool] replace_unhealthy_instances: Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
        :param pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs'] spot_maintenance_strategies: Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
        :param pulumi.Input[str] spot_price: The maximum spot bid for this override request.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] target_group_arns: A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
        :param pulumi.Input[bool] terminate_instances_with_expiration: Indicates whether running Spot
               instances should be terminated when the Spot fleet request expires.
        :param pulumi.Input[str] valid_from: The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). The default is to start fulfilling the request immediately.
        :param pulumi.Input[str] valid_until: The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
        :param pulumi.Input[bool] wait_for_fulfillment: If set, this provider will
               wait for the Spot Request to be fulfilled, and will throw an error if the
               timeout of 10m is reached.
        """
        pulumi.set(__self__, "iam_fleet_role", iam_fleet_role)
        pulumi.set(__self__, "target_capacity", target_capacity)
        if allocation_strategy is not None:
            pulumi.set(__self__, "allocation_strategy", allocation_strategy)
        if excess_capacity_termination_policy is not None:
            pulumi.set(__self__, "excess_capacity_termination_policy", excess_capacity_termination_policy)
        if fleet_type is not None:
            pulumi.set(__self__, "fleet_type", fleet_type)
        if instance_interruption_behaviour is not None:
            pulumi.set(__self__, "instance_interruption_behaviour", instance_interruption_behaviour)
        if instance_pools_to_use_count is not None:
            pulumi.set(__self__, "instance_pools_to_use_count", instance_pools_to_use_count)
        if launch_specifications is not None:
            pulumi.set(__self__, "launch_specifications", launch_specifications)
        if launch_template_configs is not None:
            pulumi.set(__self__, "launch_template_configs", launch_template_configs)
        if load_balancers is not None:
            pulumi.set(__self__, "load_balancers", load_balancers)
        if on_demand_allocation_strategy is not None:
            pulumi.set(__self__, "on_demand_allocation_strategy", on_demand_allocation_strategy)
        if on_demand_max_total_price is not None:
            pulumi.set(__self__, "on_demand_max_total_price", on_demand_max_total_price)
        if on_demand_target_capacity is not None:
            pulumi.set(__self__, "on_demand_target_capacity", on_demand_target_capacity)
        if replace_unhealthy_instances is not None:
            pulumi.set(__self__, "replace_unhealthy_instances", replace_unhealthy_instances)
        if spot_maintenance_strategies is not None:
            pulumi.set(__self__, "spot_maintenance_strategies", spot_maintenance_strategies)
        if spot_price is not None:
            pulumi.set(__self__, "spot_price", spot_price)
        if tags is not None:
            pulumi.set(__self__, "tags", tags)
        if target_group_arns is not None:
            pulumi.set(__self__, "target_group_arns", target_group_arns)
        if terminate_instances_with_expiration is not None:
            pulumi.set(__self__, "terminate_instances_with_expiration", terminate_instances_with_expiration)
        if valid_from is not None:
            pulumi.set(__self__, "valid_from", valid_from)
        if valid_until is not None:
            pulumi.set(__self__, "valid_until", valid_until)
        if wait_for_fulfillment is not None:
            pulumi.set(__self__, "wait_for_fulfillment", wait_for_fulfillment)

    @property
    @pulumi.getter(name="iamFleetRole")
    def iam_fleet_role(self) -> pulumi.Input[str]:
        """
        Grants the Spot fleet permission to terminate
        Spot instances on your behalf when you cancel its Spot fleet request using
        CancelSpotFleetRequests or when the Spot fleet request expires, if you set
        terminateInstancesWithExpiration.
        """
        return pulumi.get(self, "iam_fleet_role")

    @iam_fleet_role.setter
    def iam_fleet_role(self, value: pulumi.Input[str]):
        pulumi.set(self, "iam_fleet_role", value)

    @property
    @pulumi.getter(name="targetCapacity")
    def target_capacity(self) -> pulumi.Input[int]:
        """
        The number of units to request. You can choose to set the
        target capacity in terms of instances or a performance characteristic that is
        important to your application workload, such as vCPUs, memory, or I/O.
        """
        return pulumi.get(self, "target_capacity")

    @target_capacity.setter
    def target_capacity(self, value: pulumi.Input[int]):
        pulumi.set(self, "target_capacity", value)

    @property
    @pulumi.getter(name="allocationStrategy")
    def allocation_strategy(self) -> Optional[pulumi.Input[str]]:
        """
        Indicates how to allocate the target capacity across
        the Spot pools specified by the Spot fleet request. The default is
        `lowestPrice`.
        """
        return pulumi.get(self, "allocation_strategy")

    @allocation_strategy.setter
    def allocation_strategy(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "allocation_strategy", value)

    @property
    @pulumi.getter(name="excessCapacityTerminationPolicy")
    def excess_capacity_termination_policy(self) -> Optional[pulumi.Input[str]]:
        """
        Indicates whether running Spot
        instances should be terminated if the target capacity of the Spot fleet
        request is decreased below the current size of the Spot fleet.
        """
        return pulumi.get(self, "excess_capacity_termination_policy")

    @excess_capacity_termination_policy.setter
    def excess_capacity_termination_policy(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "excess_capacity_termination_policy", value)

    @property
    @pulumi.getter(name="fleetType")
    def fleet_type(self) -> Optional[pulumi.Input[str]]:
        """
        The type of fleet request. Indicates whether the Spot Fleet only requests the target
        capacity or also attempts to maintain it. Default is `maintain`.
        """
        return pulumi.get(self, "fleet_type")

    @fleet_type.setter
    def fleet_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "fleet_type", value)

    @property
    @pulumi.getter(name="instanceInterruptionBehaviour")
    def instance_interruption_behaviour(self) -> Optional[pulumi.Input[str]]:
        """
        Indicates whether a Spot
        instance stops or terminates when it is interrupted. Default is
        `terminate`.
        """
        return pulumi.get(self, "instance_interruption_behaviour")

    @instance_interruption_behaviour.setter
    def instance_interruption_behaviour(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "instance_interruption_behaviour", value)

    @property
    @pulumi.getter(name="instancePoolsToUseCount")
    def instance_pools_to_use_count(self) -> Optional[pulumi.Input[int]]:
        """
        The number of Spot pools across which to allocate your target Spot capacity.
        Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
        the cheapest Spot pools and evenly allocates your target Spot capacity across
        the number of Spot pools that you specify.
        """
        return pulumi.get(self, "instance_pools_to_use_count")

    @instance_pools_to_use_count.setter
    def instance_pools_to_use_count(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "instance_pools_to_use_count", value)

    @property
    @pulumi.getter(name="launchSpecifications")
    def launch_specifications(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]]:
        """
        Used to define the launch configuration of the
        spot-fleet request. Can be specified multiple times to define different bids
        across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
        """
        return pulumi.get(self, "launch_specifications")

    @launch_specifications.setter
    def launch_specifications(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]]):
        pulumi.set(self, "launch_specifications", value)

    @property
    @pulumi.getter(name="launchTemplateConfigs")
    def launch_template_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]]:
        """
        Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
        """
        return pulumi.get(self, "launch_template_configs")

    @launch_template_configs.setter
    def launch_template_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]]):
        pulumi.set(self, "launch_template_configs", value)

    @property
    @pulumi.getter(name="loadBalancers")
    def load_balancers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        A list of elastic load balancer names to add to the Spot fleet.
        """
        return pulumi.get(self, "load_balancers")

    @load_balancers.setter
    def load_balancers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "load_balancers", value)

    @property
    @pulumi.getter(name="onDemandAllocationStrategy")
    def on_demand_allocation_strategy(self) -> Optional[pulumi.Input[str]]:
        """
        The order of the launch template overrides to use in fulfilling On-Demand capacity. The possible values are `lowestPrice` and `prioritized`; the default is `lowestPrice`.
        """
        return pulumi.get(self, "on_demand_allocation_strategy")

    @on_demand_allocation_strategy.setter
    def on_demand_allocation_strategy(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "on_demand_allocation_strategy", value)

    @property
    @pulumi.getter(name="onDemandMaxTotalPrice")
    def on_demand_max_total_price(self) -> Optional[pulumi.Input[str]]:
        """
        The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.
        """
        return pulumi.get(self, "on_demand_max_total_price")

    @on_demand_max_total_price.setter
    def on_demand_max_total_price(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "on_demand_max_total_price", value)

    @property
    @pulumi.getter(name="onDemandTargetCapacity")
    def on_demand_target_capacity(self) -> Optional[pulumi.Input[int]]:
        """
        The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
        """
        return pulumi.get(self, "on_demand_target_capacity")

    @on_demand_target_capacity.setter
    def on_demand_target_capacity(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "on_demand_target_capacity", value)

    @property
    @pulumi.getter(name="replaceUnhealthyInstances")
    def replace_unhealthy_instances(self) -> Optional[pulumi.Input[bool]]:
        """
        Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
        """
        return pulumi.get(self, "replace_unhealthy_instances")

    @replace_unhealthy_instances.setter
    def replace_unhealthy_instances(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "replace_unhealthy_instances", value)

    @property
    @pulumi.getter(name="spotMaintenanceStrategies")
    def spot_maintenance_strategies(self) -> Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']]:
        """
        Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
        """
        return pulumi.get(self, "spot_maintenance_strategies")

    @spot_maintenance_strategies.setter
    def spot_maintenance_strategies(self, value: Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']]):
        pulumi.set(self, "spot_maintenance_strategies", value)

    @property
    @pulumi.getter(name="spotPrice")
    def spot_price(self) -> Optional[pulumi.Input[str]]:
        """
        The maximum spot bid for this override request.
        """
        return pulumi.get(self, "spot_price")

    @spot_price.setter
    def spot_price(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "spot_price", value)

    @property
    @pulumi.getter
    def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
        """
        A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
        """
        return pulumi.get(self, "tags")

    @tags.setter
    def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
        pulumi.set(self, "tags", value)

    @property
    @pulumi.getter(name="targetGroupArns")
    def target_group_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
        """
        return pulumi.get(self, "target_group_arns")

    @target_group_arns.setter
    def target_group_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "target_group_arns", value)

    @property
    @pulumi.getter(name="terminateInstancesWithExpiration")
    def terminate_instances_with_expiration(self) -> Optional[pulumi.Input[bool]]:
        """
        Indicates whether running Spot
        instances should be terminated when the Spot fleet request expires.
        """
        return pulumi.get(self, "terminate_instances_with_expiration")

    @terminate_instances_with_expiration.setter
    def terminate_instances_with_expiration(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "terminate_instances_with_expiration", value)

    @property
    @pulumi.getter(name="validFrom")
    def valid_from(self) -> Optional[pulumi.Input[str]]:
        """
        The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). The default is to start fulfilling the request immediately.
        """
        return pulumi.get(self, "valid_from")

    @valid_from.setter
    def valid_from(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "valid_from", value)

    @property
    @pulumi.getter(name="validUntil")
    def valid_until(self) -> Optional[pulumi.Input[str]]:
        """
        The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
        """
        return pulumi.get(self, "valid_until")

    @valid_until.setter
    def valid_until(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "valid_until", value)

    @property
    @pulumi.getter(name="waitForFulfillment")
    def wait_for_fulfillment(self) -> Optional[pulumi.Input[bool]]:
        """
        If set, this provider will
        wait for the Spot Request to be fulfilled, and will throw an error if the
        timeout of 10m is reached.
        """
        return pulumi.get(self, "wait_for_fulfillment")

    @wait_for_fulfillment.setter
    def wait_for_fulfillment(self, value: Optional[pulumi.Input[bool]]):
        pulumi.set(self, "wait_for_fulfillment", value)
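Both the args class above and the state class below apply the same rule in their constructors: a field is recorded only when the caller actually supplied it, so unset optionals never shadow server-side defaults. A stdlib-only sketch of that "set only if present" pattern (the helper name and the example fields are illustrative, not part of the Pulumi SDK):

```python
from typing import Any, Dict, Optional


def set_if_present(target: Dict[str, Any], **fields: Optional[Any]) -> Dict[str, Any]:
    """Copy only the fields that were explicitly provided (i.e. not None)."""
    for key, value in fields.items():
        if value is not None:
            target[key] = value
    return target


# Mirrors the repeated `if x is not None: pulumi.set(...)` blocks above.
args = set_if_present(
    {},
    target_capacity=3,
    fleet_type="maintain",
    spot_price=None,  # omitted: the caller did not supply it
)
assert args == {"target_capacity": 3, "fleet_type": "maintain"}
```

One caveat the real SDK shares with this sketch: the pattern cannot distinguish "not supplied" from "explicitly set to None", which is why the generated code treats None as absent.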

@pulumi.input_type
class _SpotFleetRequestState:
    def __init__(__self__, *,
                 allocation_strategy: Optional[pulumi.Input[str]] = None,
                 client_token: Optional[pulumi.Input[str]] = None,
                 excess_capacity_termination_policy: Optional[pulumi.Input[str]] = None,
                 fleet_type: Optional[pulumi.Input[str]] = None,
                 iam_fleet_role: Optional[pulumi.Input[str]] = None,
                 instance_interruption_behaviour: Optional[pulumi.Input[str]] = None,
                 instance_pools_to_use_count: Optional[pulumi.Input[int]] = None,
                 launch_specifications: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]] = None,
                 launch_template_configs: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]] = None,
                 load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 on_demand_allocation_strategy: Optional[pulumi.Input[str]] = None,
                 on_demand_max_total_price: Optional[pulumi.Input[str]] = None,
                 on_demand_target_capacity: Optional[pulumi.Input[int]] = None,
                 replace_unhealthy_instances: Optional[pulumi.Input[bool]] = None,
                 spot_maintenance_strategies: Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']] = None,
                 spot_price: Optional[pulumi.Input[str]] = None,
                 spot_request_state: Optional[pulumi.Input[str]] = None,
                 tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
                 target_capacity: Optional[pulumi.Input[int]] = None,
                 target_group_arns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
                 terminate_instances_with_expiration: Optional[pulumi.Input[bool]] = None,
                 valid_from: Optional[pulumi.Input[str]] = None,
                 valid_until: Optional[pulumi.Input[str]] = None,
                 wait_for_fulfillment: Optional[pulumi.Input[bool]] = None):
        """
        Input properties used for looking up and filtering SpotFleetRequest resources.
        :param pulumi.Input[str] allocation_strategy: Indicates how to allocate the target capacity across
               the Spot pools specified by the Spot fleet request. The default is
               `lowestPrice`.
        :param pulumi.Input[str] excess_capacity_termination_policy: Indicates whether running Spot
               instances should be terminated if the target capacity of the Spot fleet
               request is decreased below the current size of the Spot fleet.
        :param pulumi.Input[str] fleet_type: The type of fleet request. Indicates whether the Spot Fleet only requests the target
               capacity or also attempts to maintain it. Default is `maintain`.
        :param pulumi.Input[str] iam_fleet_role: Grants the Spot fleet permission to terminate
               Spot instances on your behalf when you cancel its Spot fleet request using
               CancelSpotFleetRequests or when the Spot fleet request expires, if you set
               terminateInstancesWithExpiration.
        :param pulumi.Input[str] instance_interruption_behaviour: Indicates whether a Spot
               instance stops or terminates when it is interrupted. Default is
               `terminate`.
        :param pulumi.Input[int] instance_pools_to_use_count: The number of Spot pools across which to allocate your target Spot capacity.
               Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
               the cheapest Spot pools and evenly allocates your target Spot capacity across
               the number of Spot pools that you specify.
        :param pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]] launch_specifications: Used to define the launch configuration of the
               spot-fleet request. Can be specified multiple times to define different bids
               across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
        :param pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]] launch_template_configs: Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] load_balancers: A list of elastic load balancer names to add to the Spot fleet.
        :param pulumi.Input[str] on_demand_allocation_strategy: The order of the launch template overrides to use in fulfilling On-Demand capacity. The possible values are `lowestPrice` and `prioritized`; the default is `lowestPrice`.
        :param pulumi.Input[str] on_demand_max_total_price: The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.
        :param pulumi.Input[int] on_demand_target_capacity: The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
        :param pulumi.Input[bool] replace_unhealthy_instances: Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
        :param pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs'] spot_maintenance_strategies: Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
        :param pulumi.Input[str] spot_price: The maximum spot bid for this override request.
        :param pulumi.Input[str] spot_request_state: The state of the Spot fleet request.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block, tags with matching keys will overwrite those defined at the provider level.
        :param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
        :param pulumi.Input[int] target_capacity: The number of units to request. You can choose to set the
               target capacity in terms of instances or a performance characteristic that is
               important to your application workload, such as vCPUs, memory, or I/O.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] target_group_arns: A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
        :param pulumi.Input[bool] terminate_instances_with_expiration: Indicates whether running Spot
               instances should be terminated when the Spot fleet request expires.
        :param pulumi.Input[str] valid_from: The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). The default is to start fulfilling the request immediately.
        :param pulumi.Input[str] valid_until: The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, `YYYY-MM-DDTHH:MM:SSZ`). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
        :param pulumi.Input[bool] wait_for_fulfillment: If set, this provider will
               wait for the Spot Request to be fulfilled, and will throw an error if the
               timeout of 10m is reached.
        """
        if allocation_strategy is not None:
            pulumi.set(__self__, "allocation_strategy", allocation_strategy)
        if client_token is not None:
            pulumi.set(__self__, "client_token", client_token)
        if excess_capacity_termination_policy is not None:
            pulumi.set(__self__, "excess_capacity_termination_policy", excess_capacity_termination_policy)
        if fleet_type is not None:
            pulumi.set(__self__, "fleet_type", fleet_type)
        if iam_fleet_role is not None:
            pulumi.set(__self__, "iam_fleet_role", iam_fleet_role)
        if instance_interruption_behaviour is not None:
            pulumi.set(__self__, "instance_interruption_behaviour", instance_interruption_behaviour)
        if instance_pools_to_use_count is not None:
            pulumi.set(__self__, "instance_pools_to_use_count", instance_pools_to_use_count)
        if launch_specifications is not None:
            pulumi.set(__self__, "launch_specifications", launch_specifications)
        if launch_template_configs is not None:
            pulumi.set(__self__, "launch_template_configs", launch_template_configs)
        if load_balancers is not None:
            pulumi.set(__self__, "load_balancers", load_balancers)
        if on_demand_allocation_strategy is not None:
            pulumi.set(__self__, "on_demand_allocation_strategy", on_demand_allocation_strategy)
        if on_demand_max_total_price is not None:
            pulumi.set(__self__, "on_demand_max_total_price", on_demand_max_total_price)
        if on_demand_target_capacity is not None:
            pulumi.set(__self__, "on_demand_target_capacity", on_demand_target_capacity)
        if replace_unhealthy_instances is not None:
            pulumi.set(__self__, "replace_unhealthy_instances", replace_unhealthy_instances)
        if spot_maintenance_strategies is not None:
            pulumi.set(__self__, "spot_maintenance_strategies", spot_maintenance_strategies)
        if spot_price is not None:
            pulumi.set(__self__, "spot_price", spot_price)
        if spot_request_state is not None:
            pulumi.set(__self__, "spot_request_state", spot_request_state)
        if tags is not None:
            pulumi.set(__self__, "tags", tags)
        if tags_all is not None:
            pulumi.set(__self__, "tags_all", tags_all)
        if target_capacity is not None:
            pulumi.set(__self__, "target_capacity", target_capacity)
        if target_group_arns is not None:
            pulumi.set(__self__, "target_group_arns", target_group_arns)
        if terminate_instances_with_expiration is not None:
            pulumi.set(__self__, "terminate_instances_with_expiration", terminate_instances_with_expiration)
        if valid_from is not None:
            pulumi.set(__self__, "valid_from", valid_from)
        if valid_until is not None:
            pulumi.set(__self__, "valid_until", valid_until)
        if wait_for_fulfillment is not None:
            pulumi.set(__self__, "wait_for_fulfillment", wait_for_fulfillment)
@property
@pulumi.getter(name="allocationStrategy")
def allocation_strategy(self) -> Optional[pulumi.Input[str]]:
"""
Indicates how to allocate the target capacity across
the Spot pools specified by the Spot fleet request. The default is
`lowestPrice`.
"""
return pulumi.get(self, "allocation_strategy")
@allocation_strategy.setter
def allocation_strategy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "allocation_strategy", value)
@property
@pulumi.getter(name="clientToken")
def client_token(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "client_token")
@client_token.setter
def client_token(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "client_token", value)
@property
@pulumi.getter(name="excessCapacityTerminationPolicy")
def excess_capacity_termination_policy(self) -> Optional[pulumi.Input[str]]:
"""
Indicates whether running Spot
instances should be terminated if the target capacity of the Spot fleet
request is decreased below the current size of the Spot fleet.
"""
return pulumi.get(self, "excess_capacity_termination_policy")
@excess_capacity_termination_policy.setter
def excess_capacity_termination_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "excess_capacity_termination_policy", value)
@property
@pulumi.getter(name="fleetType")
def fleet_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of fleet request. Indicates whether the Spot Fleet only requests the target
capacity or also attempts to maintain it. Default is `maintain`.
"""
return pulumi.get(self, "fleet_type")
@fleet_type.setter
def fleet_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "fleet_type", value)
@property
@pulumi.getter(name="iamFleetRole")
def iam_fleet_role(self) -> Optional[pulumi.Input[str]]:
"""
Grants the Spot fleet permission to terminate
Spot instances on your behalf when you cancel its Spot fleet request using
CancelSpotFleetRequests or when the Spot fleet request expires, if you set
terminateInstancesWithExpiration.
"""
return pulumi.get(self, "iam_fleet_role")
@iam_fleet_role.setter
def iam_fleet_role(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "iam_fleet_role", value)
@property
@pulumi.getter(name="instanceInterruptionBehaviour")
def instance_interruption_behaviour(self) -> Optional[pulumi.Input[str]]:
"""
Indicates whether a Spot
instance stops or terminates when it is interrupted. Default is
`terminate`.
"""
return pulumi.get(self, "instance_interruption_behaviour")
@instance_interruption_behaviour.setter
def instance_interruption_behaviour(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "instance_interruption_behaviour", value)
@property
@pulumi.getter(name="instancePoolsToUseCount")
def instance_pools_to_use_count(self) -> Optional[pulumi.Input[int]]:
"""
The number of Spot pools across which to allocate your target Spot capacity.
Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
the cheapest Spot pools and evenly allocates your target Spot capacity across
the number of Spot pools that you specify.
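
As a rough sketch of what even allocation across pools means (illustrative only, not the provider's actual logic):

```python
# Split a target capacity of 10 evenly across 3 Spot pools.
# Pools earlier in the (cheapest-first) ordering absorb the remainder.
target_capacity = 10
instance_pools_to_use_count = 3

base, remainder = divmod(target_capacity, instance_pools_to_use_count)
allocation = [base + (1 if i < remainder else 0)
              for i in range(instance_pools_to_use_count)]
```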
"""
return pulumi.get(self, "instance_pools_to_use_count")
@instance_pools_to_use_count.setter
def instance_pools_to_use_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "instance_pools_to_use_count", value)
@property
@pulumi.getter(name="launchSpecifications")
def launch_specifications(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]]:
"""
Used to define the launch configuration of the
spot-fleet request. Can be specified multiple times to define different bids
across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
"""
return pulumi.get(self, "launch_specifications")
@launch_specifications.setter
def launch_specifications(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchSpecificationArgs']]]]):
pulumi.set(self, "launch_specifications", value)
@property
@pulumi.getter(name="launchTemplateConfigs")
def launch_template_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]]:
"""
Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
"""
return pulumi.get(self, "launch_template_configs")
@launch_template_configs.setter
def launch_template_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['SpotFleetRequestLaunchTemplateConfigArgs']]]]):
pulumi.set(self, "launch_template_configs", value)
@property
@pulumi.getter(name="loadBalancers")
def load_balancers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of elastic load balancer names to add to the Spot fleet.
"""
return pulumi.get(self, "load_balancers")
@load_balancers.setter
def load_balancers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "load_balancers", value)
@property
@pulumi.getter(name="onDemandAllocationStrategy")
def on_demand_allocation_strategy(self) -> Optional[pulumi.Input[str]]:
"""
The order of the launch template overrides to use in fulfilling On-Demand capacity. The possible values are `lowestPrice` and `prioritized`. The default is `lowestPrice`.
"""
return pulumi.get(self, "on_demand_allocation_strategy")
@on_demand_allocation_strategy.setter
def on_demand_allocation_strategy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "on_demand_allocation_strategy", value)
@property
@pulumi.getter(name="onDemandMaxTotalPrice")
def on_demand_max_total_price(self) -> Optional[pulumi.Input[str]]:
"""
The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.
"""
return pulumi.get(self, "on_demand_max_total_price")
@on_demand_max_total_price.setter
def on_demand_max_total_price(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "on_demand_max_total_price", value)
@property
@pulumi.getter(name="onDemandTargetCapacity")
def on_demand_target_capacity(self) -> Optional[pulumi.Input[int]]:
"""
The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
"""
return pulumi.get(self, "on_demand_target_capacity")
@on_demand_target_capacity.setter
def on_demand_target_capacity(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "on_demand_target_capacity", value)
@property
@pulumi.getter(name="replaceUnhealthyInstances")
def replace_unhealthy_instances(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
"""
return pulumi.get(self, "replace_unhealthy_instances")
@replace_unhealthy_instances.setter
def replace_unhealthy_instances(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "replace_unhealthy_instances", value)
@property
@pulumi.getter(name="spotMaintenanceStrategies")
def spot_maintenance_strategies(self) -> Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']]:
"""
Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
"""
return pulumi.get(self, "spot_maintenance_strategies")
@spot_maintenance_strategies.setter
def spot_maintenance_strategies(self, value: Optional[pulumi.Input['SpotFleetRequestSpotMaintenanceStrategiesArgs']]):
pulumi.set(self, "spot_maintenance_strategies", value)
@property
@pulumi.getter(name="spotPrice")
def spot_price(self) -> Optional[pulumi.Input[str]]:
"""
The maximum spot bid for this override request.
"""
return pulumi.get(self, "spot_price")
@spot_price.setter
def spot_price(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "spot_price", value)
@property
@pulumi.getter(name="spotRequestState")
def spot_request_state(self) -> Optional[pulumi.Input[str]]:
"""
The state of the Spot fleet request.
"""
return pulumi.get(self, "spot_request_state")
@spot_request_state.setter
def spot_request_state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "spot_request_state", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider level.
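
The overwrite behaviour amounts to plain dictionary merging; a minimal illustration (the `default_tags` values here are hypothetical):

```python
# Tags supplied at the provider level via `default_tags` (hypothetical values)
provider_default_tags = {"Environment": "prod", "Team": "platform"}
# Tags configured on the resource itself
resource_tags = {"Team": "ml", "Name": "spot-fleet-example"}

# Matching keys on the resource overwrite the provider-level defaults
effective_tags = {**provider_default_tags, **resource_tags}
```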
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@tags_all.setter
def tags_all(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags_all", value)
@property
@pulumi.getter(name="targetCapacity")
def target_capacity(self) -> Optional[pulumi.Input[int]]:
"""
The number of units to request. You can choose to set the
target capacity in terms of instances or a performance characteristic that is
important to your application workload, such as vCPUs, memory, or I/O.
"""
return pulumi.get(self, "target_capacity")
@target_capacity.setter
def target_capacity(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "target_capacity", value)
@property
@pulumi.getter(name="targetGroupArns")
def target_group_arns(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
"""
return pulumi.get(self, "target_group_arns")
@target_group_arns.setter
def target_group_arns(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "target_group_arns", value)
@property
@pulumi.getter(name="terminateInstancesWithExpiration")
def terminate_instances_with_expiration(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether running Spot
instances should be terminated when the Spot fleet request expires.
"""
return pulumi.get(self, "terminate_instances_with_expiration")
@terminate_instances_with_expiration.setter
def terminate_instances_with_expiration(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "terminate_instances_with_expiration", value)
@property
@pulumi.getter(name="validFrom")
def valid_from(self) -> Optional[pulumi.Input[str]]:
"""
The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
"""
return pulumi.get(self, "valid_from")
@valid_from.setter
def valid_from(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "valid_from", value)
@property
@pulumi.getter(name="validUntil")
def valid_until(self) -> Optional[pulumi.Input[str]]:
"""
The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
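
A timestamp in that shape can be produced with the standard library, for example (the 24-hour window is arbitrary):

```python
from datetime import datetime, timedelta, timezone

# Expire the request 24 hours from now, formatted as YYYY-MM-DDTHH:MM:SSZ
valid_until = (datetime.now(timezone.utc)
               + timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ")
```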
"""
return pulumi.get(self, "valid_until")
@valid_until.setter
def valid_until(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "valid_until", value)
@property
@pulumi.getter(name="waitForFulfillment")
def wait_for_fulfillment(self) -> Optional[pulumi.Input[bool]]:
"""
If set, this provider will
wait for the Spot Request to be fulfilled, and will throw an error if the
timeout of 10m is reached.
"""
return pulumi.get(self, "wait_for_fulfillment")
@wait_for_fulfillment.setter
def wait_for_fulfillment(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "wait_for_fulfillment", value)
class SpotFleetRequest(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
allocation_strategy: Optional[pulumi.Input[str]] = None,
excess_capacity_termination_policy: Optional[pulumi.Input[str]] = None,
fleet_type: Optional[pulumi.Input[str]] = None,
iam_fleet_role: Optional[pulumi.Input[str]] = None,
instance_interruption_behaviour: Optional[pulumi.Input[str]] = None,
instance_pools_to_use_count: Optional[pulumi.Input[int]] = None,
launch_specifications: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchSpecificationArgs']]]]] = None,
launch_template_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchTemplateConfigArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
on_demand_allocation_strategy: Optional[pulumi.Input[str]] = None,
on_demand_max_total_price: Optional[pulumi.Input[str]] = None,
on_demand_target_capacity: Optional[pulumi.Input[int]] = None,
replace_unhealthy_instances: Optional[pulumi.Input[bool]] = None,
spot_maintenance_strategies: Optional[pulumi.Input[pulumi.InputType['SpotFleetRequestSpotMaintenanceStrategiesArgs']]] = None,
spot_price: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
target_capacity: Optional[pulumi.Input[int]] = None,
target_group_arns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
terminate_instances_with_expiration: Optional[pulumi.Input[bool]] = None,
valid_from: Optional[pulumi.Input[str]] = None,
valid_until: Optional[pulumi.Input[str]] = None,
wait_for_fulfillment: Optional[pulumi.Input[bool]] = None,
__props__=None):
"""
Provides an EC2 Spot Fleet Request resource. This allows a fleet of Spot
instances to be requested on the Spot market.
## Example Usage
### Using launch specifications
```python
import pulumi
import pulumi_aws as aws
# Request a Spot fleet
cheap_compute = aws.ec2.SpotFleetRequest("cheapCompute",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.03",
allocation_strategy="diversified",
target_capacity=6,
valid_until="2019-11-04T20:44:20Z",
launch_specifications=[
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
instance_type="m4.10xlarge",
ami="ami-1234",
spot_price="2.793",
placement_tenancy="dedicated",
iam_instance_profile_arn=aws_iam_instance_profile["example"]["arn"],
),
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
instance_type="m4.4xlarge",
ami="ami-5678",
key_name="my-key",
spot_price="1.117",
iam_instance_profile_arn=aws_iam_instance_profile["example"]["arn"],
availability_zone="us-west-1a",
subnet_id="subnet-1234",
weighted_capacity="35",
root_block_devices=[aws.ec2.SpotFleetRequestLaunchSpecificationRootBlockDeviceArgs(
volume_size=300,
volume_type="gp2",
)],
tags={
"Name": "spot-fleet-example",
},
),
])
```
### Using launch templates
```python
import pulumi
import pulumi_aws as aws
foo_launch_template = aws.ec2.LaunchTemplate("fooLaunchTemplate",
image_id="ami-516b9131",
instance_type="m1.small",
key_name="some-key")
foo_spot_fleet_request = aws.ec2.SpotFleetRequest("fooSpotFleetRequest",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z",
launch_template_configs=[aws.ec2.SpotFleetRequestLaunchTemplateConfigArgs(
launch_template_specification=aws.ec2.SpotFleetRequestLaunchTemplateConfigLaunchTemplateSpecificationArgs(
id=foo_launch_template.id,
version=foo_launch_template.latest_version,
),
)],
opts=pulumi.ResourceOptions(depends_on=[aws_iam_policy_attachment["test-attach"]]))
```
> **NOTE:** This provider does not support specifying multiple `subnet_id` or `availability_zone` parameters in the same
launch configuration block. To specify multiple values, use separate launch configuration blocks:
### Using multiple launch specifications
```python
import pulumi
import pulumi_aws as aws
foo = aws.ec2.SpotFleetRequest("foo",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
launch_specifications=[
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
ami="ami-d06a90b0",
availability_zone="us-west-2a",
instance_type="m1.small",
key_name="my-key",
),
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
ami="ami-d06a90b0",
availability_zone="us-west-2a",
instance_type="m5.large",
key_name="my-key",
),
],
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z")
```
### Using multiple launch configurations
```python
import pulumi
import pulumi_aws as aws
example = aws.ec2.get_subnet_ids(vpc_id=var["vpc_id"])
foo_launch_template = aws.ec2.LaunchTemplate("fooLaunchTemplate",
image_id="ami-516b9131",
instance_type="m1.small",
key_name="some-key")
foo_spot_fleet_request = aws.ec2.SpotFleetRequest("fooSpotFleetRequest",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z",
launch_template_configs=[aws.ec2.SpotFleetRequestLaunchTemplateConfigArgs(
launch_template_specification=aws.ec2.SpotFleetRequestLaunchTemplateConfigLaunchTemplateSpecificationArgs(
id=foo_launch_template.id,
version=foo_launch_template.latest_version,
),
overrides=[
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[0],
),
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[1],
),
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[2],
),
],
)],
opts=pulumi.ResourceOptions(depends_on=[aws_iam_policy_attachment["test-attach"]]))
```
## Import
Spot Fleet Requests can be imported using `id`, e.g.,
```sh
$ pulumi import aws:ec2/spotFleetRequest:SpotFleetRequest fleet sfr-005e9ec8-5546-4c31-b317-31a62325411e
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] allocation_strategy: Indicates how to allocate the target capacity across
the Spot pools specified by the Spot fleet request. The default is
`lowestPrice`.
:param pulumi.Input[str] excess_capacity_termination_policy: Indicates whether running Spot
instances should be terminated if the target capacity of the Spot fleet
request is decreased below the current size of the Spot fleet.
:param pulumi.Input[str] fleet_type: The type of fleet request. Indicates whether the Spot Fleet only requests the target
capacity or also attempts to maintain it. Default is `maintain`.
:param pulumi.Input[str] iam_fleet_role: Grants the Spot fleet permission to terminate
Spot instances on your behalf when you cancel its Spot fleet request using
CancelSpotFleetRequests or when the Spot fleet request expires, if you set
terminateInstancesWithExpiration.
:param pulumi.Input[str] instance_interruption_behaviour: Indicates whether a Spot
instance stops or terminates when it is interrupted. Default is
`terminate`.
:param pulumi.Input[int] instance_pools_to_use_count: The number of Spot pools across which to allocate your target Spot capacity.
Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
the cheapest Spot pools and evenly allocates your target Spot capacity across
the number of Spot pools that you specify.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchSpecificationArgs']]]] launch_specifications: Used to define the launch configuration of the
spot-fleet request. Can be specified multiple times to define different bids
across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchTemplateConfigArgs']]]] launch_template_configs: Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
:param pulumi.Input[Sequence[pulumi.Input[str]]] load_balancers: A list of elastic load balancer names to add to the Spot fleet.
:param pulumi.Input[str] on_demand_allocation_strategy: The order of the launch template overrides to use in fulfilling On-Demand capacity. The possible values are `lowestPrice` and `prioritized`. The default is `lowestPrice`.
:param pulumi.Input[str] on_demand_max_total_price: The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.
:param pulumi.Input[int] on_demand_target_capacity: The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
:param pulumi.Input[bool] replace_unhealthy_instances: Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
:param pulumi.Input[pulumi.InputType['SpotFleetRequestSpotMaintenanceStrategiesArgs']] spot_maintenance_strategies: Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
:param pulumi.Input[str] spot_price: The maximum spot bid for this override request.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider level.
:param pulumi.Input[int] target_capacity: The number of units to request. You can choose to set the
target capacity in terms of instances or a performance characteristic that is
important to your application workload, such as vCPUs, memory, or I/O.
:param pulumi.Input[Sequence[pulumi.Input[str]]] target_group_arns: A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
:param pulumi.Input[bool] terminate_instances_with_expiration: Indicates whether running Spot
instances should be terminated when the Spot fleet request expires.
:param pulumi.Input[str] valid_from: The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
:param pulumi.Input[str] valid_until: The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
:param pulumi.Input[bool] wait_for_fulfillment: If set, this provider will
wait for the Spot Request to be fulfilled, and will throw an error if the
timeout of 10m is reached.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: SpotFleetRequestArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an EC2 Spot Fleet Request resource. This allows a fleet of Spot
instances to be requested on the Spot market.
## Example Usage
### Using launch specifications
```python
import pulumi
import pulumi_aws as aws
# Request a Spot fleet
cheap_compute = aws.ec2.SpotFleetRequest("cheapCompute",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.03",
allocation_strategy="diversified",
target_capacity=6,
valid_until="2019-11-04T20:44:20Z",
launch_specifications=[
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
instance_type="m4.10xlarge",
ami="ami-1234",
spot_price="2.793",
placement_tenancy="dedicated",
iam_instance_profile_arn=aws_iam_instance_profile["example"]["arn"],
),
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
instance_type="m4.4xlarge",
ami="ami-5678",
key_name="my-key",
spot_price="1.117",
iam_instance_profile_arn=aws_iam_instance_profile["example"]["arn"],
availability_zone="us-west-1a",
subnet_id="subnet-1234",
weighted_capacity="35",
root_block_devices=[aws.ec2.SpotFleetRequestLaunchSpecificationRootBlockDeviceArgs(
volume_size=300,
volume_type="gp2",
)],
tags={
"Name": "spot-fleet-example",
},
),
])
```
### Using launch templates
```python
import pulumi
import pulumi_aws as aws
foo_launch_template = aws.ec2.LaunchTemplate("fooLaunchTemplate",
image_id="ami-516b9131",
instance_type="m1.small",
key_name="some-key")
foo_spot_fleet_request = aws.ec2.SpotFleetRequest("fooSpotFleetRequest",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z",
launch_template_configs=[aws.ec2.SpotFleetRequestLaunchTemplateConfigArgs(
launch_template_specification=aws.ec2.SpotFleetRequestLaunchTemplateConfigLaunchTemplateSpecificationArgs(
id=foo_launch_template.id,
version=foo_launch_template.latest_version,
),
)],
opts=pulumi.ResourceOptions(depends_on=[aws_iam_policy_attachment["test-attach"]]))
```
> **NOTE:** This provider does not support specifying multiple `subnet_id` or `availability_zone` parameters in the same
launch configuration block. To specify multiple values, use separate launch configuration blocks:
### Using multiple launch specifications
```python
import pulumi
import pulumi_aws as aws
foo = aws.ec2.SpotFleetRequest("foo",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
launch_specifications=[
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
ami="ami-d06a90b0",
availability_zone="us-west-2a",
instance_type="m1.small",
key_name="my-key",
),
aws.ec2.SpotFleetRequestLaunchSpecificationArgs(
ami="ami-d06a90b0",
availability_zone="us-west-2a",
instance_type="m5.large",
key_name="my-key",
),
],
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z")
```
### Using multiple launch configurations
```python
import pulumi
import pulumi_aws as aws
example = aws.ec2.get_subnet_ids(vpc_id=var["vpc_id"])
foo_launch_template = aws.ec2.LaunchTemplate("fooLaunchTemplate",
image_id="ami-516b9131",
instance_type="m1.small",
key_name="some-key")
foo_spot_fleet_request = aws.ec2.SpotFleetRequest("fooSpotFleetRequest",
iam_fleet_role="arn:aws:iam::12345678:role/spot-fleet",
spot_price="0.005",
target_capacity=2,
valid_until="2019-11-04T20:44:20Z",
launch_template_configs=[aws.ec2.SpotFleetRequestLaunchTemplateConfigArgs(
launch_template_specification=aws.ec2.SpotFleetRequestLaunchTemplateConfigLaunchTemplateSpecificationArgs(
id=foo_launch_template.id,
version=foo_launch_template.latest_version,
),
overrides=[
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[0],
),
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[1],
),
aws.ec2.SpotFleetRequestLaunchTemplateConfigOverrideArgs(
subnet_id=example.ids[2],
),
],
)],
opts=pulumi.ResourceOptions(depends_on=[aws_iam_policy_attachment["test-attach"]]))
```
## Import
Spot Fleet Requests can be imported using `id`, e.g.,
```sh
$ pulumi import aws:ec2/spotFleetRequest:SpotFleetRequest fleet sfr-005e9ec8-5546-4c31-b317-31a62325411e
```
:param str resource_name: The name of the resource.
:param SpotFleetRequestArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(SpotFleetRequestArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
allocation_strategy: Optional[pulumi.Input[str]] = None,
excess_capacity_termination_policy: Optional[pulumi.Input[str]] = None,
fleet_type: Optional[pulumi.Input[str]] = None,
iam_fleet_role: Optional[pulumi.Input[str]] = None,
instance_interruption_behaviour: Optional[pulumi.Input[str]] = None,
instance_pools_to_use_count: Optional[pulumi.Input[int]] = None,
launch_specifications: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchSpecificationArgs']]]]] = None,
launch_template_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchTemplateConfigArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
on_demand_allocation_strategy: Optional[pulumi.Input[str]] = None,
on_demand_max_total_price: Optional[pulumi.Input[str]] = None,
on_demand_target_capacity: Optional[pulumi.Input[int]] = None,
replace_unhealthy_instances: Optional[pulumi.Input[bool]] = None,
spot_maintenance_strategies: Optional[pulumi.Input[pulumi.InputType['SpotFleetRequestSpotMaintenanceStrategiesArgs']]] = None,
spot_price: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
target_capacity: Optional[pulumi.Input[int]] = None,
target_group_arns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
terminate_instances_with_expiration: Optional[pulumi.Input[bool]] = None,
valid_from: Optional[pulumi.Input[str]] = None,
valid_until: Optional[pulumi.Input[str]] = None,
wait_for_fulfillment: Optional[pulumi.Input[bool]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = SpotFleetRequestArgs.__new__(SpotFleetRequestArgs)
__props__.__dict__["allocation_strategy"] = allocation_strategy
__props__.__dict__["excess_capacity_termination_policy"] = excess_capacity_termination_policy
__props__.__dict__["fleet_type"] = fleet_type
if iam_fleet_role is None and not opts.urn:
raise TypeError("Missing required property 'iam_fleet_role'")
__props__.__dict__["iam_fleet_role"] = iam_fleet_role
__props__.__dict__["instance_interruption_behaviour"] = instance_interruption_behaviour
__props__.__dict__["instance_pools_to_use_count"] = instance_pools_to_use_count
__props__.__dict__["launch_specifications"] = launch_specifications
__props__.__dict__["launch_template_configs"] = launch_template_configs
__props__.__dict__["load_balancers"] = load_balancers
__props__.__dict__["on_demand_allocation_strategy"] = on_demand_allocation_strategy
__props__.__dict__["on_demand_max_total_price"] = on_demand_max_total_price
__props__.__dict__["on_demand_target_capacity"] = on_demand_target_capacity
__props__.__dict__["replace_unhealthy_instances"] = replace_unhealthy_instances
__props__.__dict__["spot_maintenance_strategies"] = spot_maintenance_strategies
__props__.__dict__["spot_price"] = spot_price
__props__.__dict__["tags"] = tags
if target_capacity is None and not opts.urn:
raise TypeError("Missing required property 'target_capacity'")
__props__.__dict__["target_capacity"] = target_capacity
__props__.__dict__["target_group_arns"] = target_group_arns
__props__.__dict__["terminate_instances_with_expiration"] = terminate_instances_with_expiration
__props__.__dict__["valid_from"] = valid_from
__props__.__dict__["valid_until"] = valid_until
__props__.__dict__["wait_for_fulfillment"] = wait_for_fulfillment
__props__.__dict__["client_token"] = None
__props__.__dict__["spot_request_state"] = None
__props__.__dict__["tags_all"] = None
super(SpotFleetRequest, __self__).__init__(
'aws:ec2/spotFleetRequest:SpotFleetRequest',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
allocation_strategy: Optional[pulumi.Input[str]] = None,
client_token: Optional[pulumi.Input[str]] = None,
excess_capacity_termination_policy: Optional[pulumi.Input[str]] = None,
fleet_type: Optional[pulumi.Input[str]] = None,
iam_fleet_role: Optional[pulumi.Input[str]] = None,
instance_interruption_behaviour: Optional[pulumi.Input[str]] = None,
instance_pools_to_use_count: Optional[pulumi.Input[int]] = None,
launch_specifications: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchSpecificationArgs']]]]] = None,
launch_template_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchTemplateConfigArgs']]]]] = None,
load_balancers: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
on_demand_allocation_strategy: Optional[pulumi.Input[str]] = None,
on_demand_max_total_price: Optional[pulumi.Input[str]] = None,
on_demand_target_capacity: Optional[pulumi.Input[int]] = None,
replace_unhealthy_instances: Optional[pulumi.Input[bool]] = None,
spot_maintenance_strategies: Optional[pulumi.Input[pulumi.InputType['SpotFleetRequestSpotMaintenanceStrategiesArgs']]] = None,
spot_price: Optional[pulumi.Input[str]] = None,
spot_request_state: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
target_capacity: Optional[pulumi.Input[int]] = None,
target_group_arns: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
terminate_instances_with_expiration: Optional[pulumi.Input[bool]] = None,
valid_from: Optional[pulumi.Input[str]] = None,
valid_until: Optional[pulumi.Input[str]] = None,
wait_for_fulfillment: Optional[pulumi.Input[bool]] = None) -> 'SpotFleetRequest':
"""
Get an existing SpotFleetRequest resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] allocation_strategy: Indicates how to allocate the target capacity across
the Spot pools specified by the Spot fleet request. The default is
`lowestPrice`.
:param pulumi.Input[str] excess_capacity_termination_policy: Indicates whether running Spot
instances should be terminated if the target capacity of the Spot fleet
request is decreased below the current size of the Spot fleet.
:param pulumi.Input[str] fleet_type: The type of fleet request. Indicates whether the Spot Fleet only requests the target
capacity or also attempts to maintain it. Default is `maintain`.
:param pulumi.Input[str] iam_fleet_role: Grants the Spot fleet permission to terminate
Spot instances on your behalf when you cancel its Spot fleet request using
CancelSpotFleetRequests or when the Spot fleet request expires, if you set
terminateInstancesWithExpiration.
:param pulumi.Input[str] instance_interruption_behaviour: Indicates whether a Spot
instance stops or terminates when it is interrupted. Default is
`terminate`.
:param pulumi.Input[int] instance_pools_to_use_count: The number of Spot pools across which to allocate your target Spot capacity.
Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
the cheapest Spot pools and evenly allocates your target Spot capacity across
the number of Spot pools that you specify.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchSpecificationArgs']]]] launch_specifications: Used to define the launch configuration of the
spot-fleet request. Can be specified multiple times to define different bids
across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['SpotFleetRequestLaunchTemplateConfigArgs']]]] launch_template_configs: Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
:param pulumi.Input[Sequence[pulumi.Input[str]]] load_balancers: A list of elastic load balancer names to add to the Spot fleet.
:param pulumi.Input[str] on_demand_allocation_strategy: The order of the launch template overrides to use in fulfilling On-Demand capacity. the possible values are: `lowestPrice` and `prioritized`. the default is `lowestPrice`.
:param pulumi.Input[str] on_demand_max_total_price: The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
:param pulumi.Input[int] on_demand_target_capacity: The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
:param pulumi.Input[bool] replace_unhealthy_instances: Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
:param pulumi.Input[pulumi.InputType['SpotFleetRequestSpotMaintenanceStrategiesArgs']] spot_maintenance_strategies: Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
:param pulumi.Input[str] spot_price: The maximum spot bid for this override request.
:param pulumi.Input[str] spot_request_state: The state of the Spot fleet request.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A map of tags to assign to the resource. .If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider .
:param pulumi.Input[int] target_capacity: The number of units to request. You can choose to set the
target capacity in terms of instances or a performance characteristic that is
important to your application workload, such as vCPUs, memory, or I/O.
:param pulumi.Input[Sequence[pulumi.Input[str]]] target_group_arns: A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
:param pulumi.Input[bool] terminate_instances_with_expiration: Indicates whether running Spot
instances should be terminated when the Spot fleet request expires.
:param pulumi.Input[str] valid_from: The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
:param pulumi.Input[str] valid_until: The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
:param pulumi.Input[bool] wait_for_fulfillment: If set, this provider will
wait for the Spot Request to be fulfilled, and will throw an error if the
timeout of 10m is reached.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _SpotFleetRequestState.__new__(_SpotFleetRequestState)
__props__.__dict__["allocation_strategy"] = allocation_strategy
__props__.__dict__["client_token"] = client_token
__props__.__dict__["excess_capacity_termination_policy"] = excess_capacity_termination_policy
__props__.__dict__["fleet_type"] = fleet_type
__props__.__dict__["iam_fleet_role"] = iam_fleet_role
__props__.__dict__["instance_interruption_behaviour"] = instance_interruption_behaviour
__props__.__dict__["instance_pools_to_use_count"] = instance_pools_to_use_count
__props__.__dict__["launch_specifications"] = launch_specifications
__props__.__dict__["launch_template_configs"] = launch_template_configs
__props__.__dict__["load_balancers"] = load_balancers
__props__.__dict__["on_demand_allocation_strategy"] = on_demand_allocation_strategy
__props__.__dict__["on_demand_max_total_price"] = on_demand_max_total_price
__props__.__dict__["on_demand_target_capacity"] = on_demand_target_capacity
__props__.__dict__["replace_unhealthy_instances"] = replace_unhealthy_instances
__props__.__dict__["spot_maintenance_strategies"] = spot_maintenance_strategies
__props__.__dict__["spot_price"] = spot_price
__props__.__dict__["spot_request_state"] = spot_request_state
__props__.__dict__["tags"] = tags
__props__.__dict__["tags_all"] = tags_all
__props__.__dict__["target_capacity"] = target_capacity
__props__.__dict__["target_group_arns"] = target_group_arns
__props__.__dict__["terminate_instances_with_expiration"] = terminate_instances_with_expiration
__props__.__dict__["valid_from"] = valid_from
__props__.__dict__["valid_until"] = valid_until
__props__.__dict__["wait_for_fulfillment"] = wait_for_fulfillment
return SpotFleetRequest(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="allocationStrategy")
def allocation_strategy(self) -> pulumi.Output[Optional[str]]:
"""
Indicates how to allocate the target capacity across
the Spot pools specified by the Spot fleet request. The default is
`lowestPrice`.
"""
return pulumi.get(self, "allocation_strategy")
@property
@pulumi.getter(name="clientToken")
def client_token(self) -> pulumi.Output[str]:
return pulumi.get(self, "client_token")
@property
@pulumi.getter(name="excessCapacityTerminationPolicy")
def excess_capacity_termination_policy(self) -> pulumi.Output[Optional[str]]:
"""
Indicates whether running Spot
instances should be terminated if the target capacity of the Spot fleet
request is decreased below the current size of the Spot fleet.
"""
return pulumi.get(self, "excess_capacity_termination_policy")
@property
@pulumi.getter(name="fleetType")
def fleet_type(self) -> pulumi.Output[Optional[str]]:
"""
The type of fleet request. Indicates whether the Spot Fleet only requests the target
capacity or also attempts to maintain it. Default is `maintain`.
"""
return pulumi.get(self, "fleet_type")
@property
@pulumi.getter(name="iamFleetRole")
def iam_fleet_role(self) -> pulumi.Output[str]:
"""
Grants the Spot fleet permission to terminate
Spot instances on your behalf when you cancel its Spot fleet request using
CancelSpotFleetRequests or when the Spot fleet request expires, if you set
terminateInstancesWithExpiration.
"""
return pulumi.get(self, "iam_fleet_role")
@property
@pulumi.getter(name="instanceInterruptionBehaviour")
def instance_interruption_behaviour(self) -> pulumi.Output[Optional[str]]:
"""
Indicates whether a Spot
instance stops or terminates when it is interrupted. Default is
`terminate`.
"""
return pulumi.get(self, "instance_interruption_behaviour")
@property
@pulumi.getter(name="instancePoolsToUseCount")
def instance_pools_to_use_count(self) -> pulumi.Output[Optional[int]]:
"""
The number of Spot pools across which to allocate your target Spot capacity.
Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects
the cheapest Spot pools and evenly allocates your target Spot capacity across
the number of Spot pools that you specify.
"""
return pulumi.get(self, "instance_pools_to_use_count")
@property
@pulumi.getter(name="launchSpecifications")
def launch_specifications(self) -> pulumi.Output[Optional[Sequence['outputs.SpotFleetRequestLaunchSpecification']]]:
"""
Used to define the launch configuration of the
spot-fleet request. Can be specified multiple times to define different bids
across different markets and instance types. Conflicts with `launch_template_config`. At least one of `launch_specification` or `launch_template_config` is required.
"""
return pulumi.get(self, "launch_specifications")
@property
@pulumi.getter(name="launchTemplateConfigs")
def launch_template_configs(self) -> pulumi.Output[Optional[Sequence['outputs.SpotFleetRequestLaunchTemplateConfig']]]:
"""
Launch template configuration block. See Launch Template Configs below for more details. Conflicts with `launch_specification`. At least one of `launch_specification` or `launch_template_config` is required.
"""
return pulumi.get(self, "launch_template_configs")
@property
@pulumi.getter(name="loadBalancers")
def load_balancers(self) -> pulumi.Output[Sequence[str]]:
"""
A list of elastic load balancer names to add to the Spot fleet.
"""
return pulumi.get(self, "load_balancers")
@property
@pulumi.getter(name="onDemandAllocationStrategy")
def on_demand_allocation_strategy(self) -> pulumi.Output[Optional[str]]:
"""
The order of the launch template overrides to use in fulfilling On-Demand capacity. the possible values are: `lowestPrice` and `prioritized`. the default is `lowestPrice`.
"""
return pulumi.get(self, "on_demand_allocation_strategy")
@property
@pulumi.getter(name="onDemandMaxTotalPrice")
def on_demand_max_total_price(self) -> pulumi.Output[Optional[str]]:
"""
The maximum amount per hour for On-Demand Instances that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity.
"""
return pulumi.get(self, "on_demand_max_total_price")
@property
@pulumi.getter(name="onDemandTargetCapacity")
def on_demand_target_capacity(self) -> pulumi.Output[Optional[int]]:
"""
The number of On-Demand units to request. If the request type is `maintain`, you can specify a target capacity of 0 and add capacity later.
"""
return pulumi.get(self, "on_demand_target_capacity")
@property
@pulumi.getter(name="replaceUnhealthyInstances")
def replace_unhealthy_instances(self) -> pulumi.Output[Optional[bool]]:
"""
Indicates whether Spot fleet should replace unhealthy instances. Default `false`.
"""
return pulumi.get(self, "replace_unhealthy_instances")
@property
@pulumi.getter(name="spotMaintenanceStrategies")
def spot_maintenance_strategies(self) -> pulumi.Output[Optional['outputs.SpotFleetRequestSpotMaintenanceStrategies']]:
"""
Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below.
"""
return pulumi.get(self, "spot_maintenance_strategies")
@property
@pulumi.getter(name="spotPrice")
def spot_price(self) -> pulumi.Output[Optional[str]]:
"""
The maximum spot bid for this override request.
"""
return pulumi.get(self, "spot_price")
@property
@pulumi.getter(name="spotRequestState")
def spot_request_state(self) -> pulumi.Output[str]:
"""
The state of the Spot fleet request.
"""
return pulumi.get(self, "spot_request_state")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A map of tags to assign to the resource. .If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> pulumi.Output[Mapping[str, str]]:
"""
A map of tags assigned to the resource, including those inherited from the provider .
"""
return pulumi.get(self, "tags_all")
@property
@pulumi.getter(name="targetCapacity")
def target_capacity(self) -> pulumi.Output[int]:
"""
The number of units to request. You can choose to set the
target capacity in terms of instances or a performance characteristic that is
important to your application workload, such as vCPUs, memory, or I/O.
"""
return pulumi.get(self, "target_capacity")
@property
@pulumi.getter(name="targetGroupArns")
def target_group_arns(self) -> pulumi.Output[Sequence[str]]:
"""
A list of `alb.TargetGroup` ARNs, for use with Application Load Balancing.
"""
return pulumi.get(self, "target_group_arns")
@property
@pulumi.getter(name="terminateInstancesWithExpiration")
def terminate_instances_with_expiration(self) -> pulumi.Output[Optional[bool]]:
"""
Indicates whether running Spot
instances should be terminated when the Spot fleet request expires.
"""
return pulumi.get(self, "terminate_instances_with_expiration")
@property
@pulumi.getter(name="validFrom")
def valid_from(self) -> pulumi.Output[Optional[str]]:
"""
The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
"""
return pulumi.get(self, "valid_from")
@property
@pulumi.getter(name="validUntil")
def valid_until(self) -> pulumi.Output[Optional[str]]:
"""
The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request.
"""
return pulumi.get(self, "valid_until")
@property
@pulumi.getter(name="waitForFulfillment")
def wait_for_fulfillment(self) -> pulumi.Output[Optional[bool]]:
"""
If set, this provider will
wait for the Spot Request to be fulfilled, and will throw an error if the
timeout of 10m is reached.
"""
return pulumi.get(self, "wait_for_fulfillment")
| 57.005518 | 346 | 0.687016 | 10,995 | 92,976 | 5.590541 | 0.041382 | 0.067466 | 0.063057 | 0.034001 | 0.964925 | 0.959491 | 0.955375 | 0.949177 | 0.943076 | 0.931834 | 0 | 0.008243 | 0.226209 | 92,976 | 1,630 | 347 | 57.040491 | 0.846144 | 0.456483 | 0 | 0.857333 | 1 | 0 | 0.158678 | 0.106825 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168 | false | 0.001333 | 0.009333 | 0.002667 | 0.278667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
# robogym/envs/dactyl/tests/test_rubik_solvers.py (repo: 0xflotus/robogym, MIT license)
import unittest
] | 31 | 2020-11-12T22:31:01.000Z | 2022-02-28T20:34:48.000Z | import unittest
import numpy as np
import pytest
from numpy.testing import assert_allclose
from robogym.envs.dactyl.full_perpendicular import make_env
from robogym.utils import rotation
class TestRubikSolvers(unittest.TestCase):
X_AXIS = 0
Y_AXIS = 1
Z_AXIS = 2
NEGATIVE_SIDE = 0
POSITIVE_SIDE = 1
CW = 1
CCW = -1
# Apply B R U rotations to solved cube
scrambles = [
{
"rotation": {
"axis": Z_AXIS,
"side": POSITIVE_SIDE,
"angle": CW * np.pi / 2,
},
"recovery_flips": np.array([[1, 1, 1]]),
},
{
"rotation": {
"axis": X_AXIS,
"side": POSITIVE_SIDE,
"angle": CW * np.pi / 2,
},
"recovery_flips": np.array([[0, 0, 1]]),
},
{
"rotation": {
"axis": Y_AXIS,
"side": POSITIVE_SIDE,
"angle": CW * np.pi / 2,
},
"recovery_flips": np.array([[0, 1, 1]]),
},
]
def test_face_cube_solver(self):
constants = {
"goal_generation": "face_cube_solver",
"num_scramble_steps": 3,
"randomize_face_angles": False,
"randomize": False,
}
env = make_env(constants=constants)
unwrapped = env.unwrapped
# start from deterministic straight qpos
unwrapped.mujoco_simulation.set_qpos("cube_rotation", [1.0, 0.0, 0.0, 0.0])
assert_allclose(
unwrapped.mujoco_simulation.get_qpos("cube_rotation"), [1.0, 0.0, 0.0, 0.0]
)
current_face_rotations = np.zeros(6)
for step in self.scrambles:
rot = step["rotation"]
unwrapped.mujoco_simulation.cube_model.rotate_face(
rot["axis"], rot["side"], rot["angle"]
)
current_face_rotations[rot["axis"] * 2 + rot["side"]] += rot["angle"]
# track remaining face rotations on cube
assert_allclose(
current_face_rotations, [0, np.pi / 2, 0, np.pi / 2, 0, np.pi / 2]
)
unwrapped.reset_goal_generation()
steps_left = len(self.scrambles)
for step in reversed(self.scrambles):
# assert state before recovery flip
_, reached, goal_info = env.unwrapped.goal_info()
assert not reached
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == steps_left
goal = goal_info["goal"]
assert goal["goal_type"] == "flip"
# check if expected quat goal is met
recovery_quat = rotation.apply_euler_rotations(
unwrapped.mujoco_simulation.get_qpos("cube_rotation"),
step["recovery_flips"],
)
assert_allclose(goal["cube_quat"], recovery_quat, atol=1e-8)
assert_allclose(goal["cube_face_angle"], current_face_rotations)
# apply target quat rotation to cube and recompute goal
unwrapped.mujoco_simulation.set_qpos("cube_rotation", recovery_quat)
unwrapped.update_goal_info()
_, reached, info = unwrapped.goal_info()
assert reached
unwrapped.mujoco_simulation.forward()
unwrapped.reset_goal()
solution = step["rotation"]
_, reached, goal_info = env.unwrapped.goal_info()
assert not reached
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == steps_left
goal = goal_info["goal"]
assert goal["goal_type"] == "rotation"
assert goal["axis_nr"] == solution["axis"]
assert goal["axis_sign"][0] == solution["side"]
current_face_rotations[solution["axis"] * 2 + solution["side"]] -= solution[
"angle"
]
assert_allclose(goal["cube_face_angle"], current_face_rotations)
# actually rotate cube in the opposite direction of the original rotation
unwrapped.mujoco_simulation.cube_model.rotate_face(
solution["axis"], solution["side"], -solution["angle"]
)
unwrapped.update_goal_info()
_, reached, info = unwrapped.goal_info()
assert reached
unwrapped.mujoco_simulation.forward()
unwrapped.reset_goal()
steps_left -= 1
assert steps_left == 0
def test_release_cube_solver(self):
constants = {
"goal_generation": "release_cube_solver",
"num_scramble_steps": 3,
"randomize_face_angles": False,
"randomize": False,
}
env = make_env(constants=constants)
unwrapped = env.unwrapped
# start from deterministic straight qpos
unwrapped.mujoco_simulation.set_qpos("cube_rotation", [1.0, 0.0, 0.0, 0.0])
assert_allclose(
unwrapped.mujoco_simulation.get_qpos("cube_rotation"), [1.0, 0.0, 0.0, 0.0]
)
current_face_rotations = np.zeros(6)
for step in self.scrambles:
rot = step["rotation"]
unwrapped.mujoco_simulation.cube_model.rotate_face(
rot["axis"], rot["side"], rot["angle"]
)
current_face_rotations[rot["axis"] * 2 + rot["side"]] += rot["angle"]
# track remaining face rotations on cube
assert_allclose(
current_face_rotations, [0, np.pi / 2, 0, np.pi / 2, 0, np.pi / 2]
)
unwrapped.reset_goal_generation()
steps_left = len(self.scrambles)
for step in reversed(self.scrambles):
# assert state before recovery flip
_, reached, goal_info = env.unwrapped.goal_info()
assert not reached
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == steps_left
goal = goal_info["goal"]
assert goal["goal_type"] == "flip"
# check if expected quat goal is met
recovery_quat = rotation.apply_euler_rotations(
unwrapped.mujoco_simulation.get_qpos("cube_rotation"),
step["recovery_flips"],
)
assert_allclose(goal["cube_quat"], recovery_quat, atol=1e-8)
assert_allclose(goal["cube_face_angle"], current_face_rotations)
# apply target quat rotation to cube and recompute goal
unwrapped.mujoco_simulation.set_qpos("cube_rotation", recovery_quat)
unwrapped.update_goal_info()
_, reached, info = unwrapped.goal_info()
assert reached
unwrapped.mujoco_simulation.forward()
unwrapped.reset_goal()
solution = step["rotation"]
_, reached, goal_info = env.unwrapped.goal_info()
assert not reached
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == steps_left
goal = goal_info["goal"]
assert goal["goal_type"] == "rotation"
assert goal["axis_nr"] == solution["axis"]
assert goal["axis_sign"][0] == solution["side"]
current_face_rotations[solution["axis"] * 2 + solution["side"]] -= solution[
"angle"
]
assert_allclose(goal["cube_face_angle"], current_face_rotations)
# actually rotate cube in the opposite direction of the original rotation
unwrapped.mujoco_simulation.cube_model.rotate_face(
solution["axis"], solution["side"], -solution["angle"]
)
unwrapped.update_goal_info()
_, reached, info = unwrapped.goal_info()
assert reached
unwrapped.mujoco_simulation.forward()
unwrapped.reset_goal()
steps_left -= 1
assert steps_left == 0
_, _, info = unwrapped.goal_info()
assert info["solved"]
unwrapped.mujoco_simulation.forward()
assert info["solved"]
def test_unconstrained_cube_solver(self):
constants = {
"goal_generation": "unconstrained_cube_solver",
"num_scramble_steps": 3,
"randomize_face_angles": False,
"randomize": False,
}
env = make_env(constants=constants)
unwrapped = env.unwrapped
# start from deterministic straight qpos
unwrapped.mujoco_simulation.set_qpos("cube_rotation", [1.0, 0.0, 0.0, 0.0])
assert_allclose(
unwrapped.mujoco_simulation.get_qpos("cube_rotation"), [1.0, 0.0, 0.0, 0.0]
)
current_face_rotations = np.zeros(6)
for step in self.scrambles:
rot = step["rotation"]
unwrapped.mujoco_simulation.cube_model.rotate_face(
rot["axis"], rot["side"], rot["angle"]
)
current_face_rotations[rot["axis"] * 2 + rot["side"]] += rot["angle"]
# track remaining face rotations on cube
assert_allclose(
current_face_rotations, [0, np.pi / 2, 0, np.pi / 2, 0, np.pi / 2]
)
unwrapped.reset_goal_generation()
steps_left = len(self.scrambles)
for step in reversed(self.scrambles):
solution = step["rotation"]
_, reached, goal_info = env.unwrapped.goal_info()
assert not reached
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == steps_left
goal = goal_info["goal"]
assert goal["goal_type"] == "rotation"
current_face_rotations[solution["axis"] * 2 + solution["side"]] -= solution[
"angle"
]
assert_allclose(goal["cube_face_angle"], current_face_rotations)
# actually rotate cube in the opposite direction of the original rotation
unwrapped.mujoco_simulation.cube_model.rotate_face(
solution["axis"], solution["side"], -solution["angle"]
)
unwrapped.update_goal_info()
_, reached, info = unwrapped.goal_info()
assert reached
unwrapped.mujoco_simulation.forward()
unwrapped.reset_goal()
steps_left -= 1
assert steps_left == 0
@pytest.mark.parametrize("axis", [0, 1, 2])
@pytest.mark.parametrize("side", [0, 1])
@pytest.mark.parametrize("rot_direction", [-1, 1])
def test_unconstrained_cube_solver(axis, side, rot_direction):
constants = {
"goal_generation": "unconstrained_cube_solver",
"num_scramble_steps": 0,
"randomize_face_angles": False,
"randomize": False,
}
env = make_env(constants=constants)
unwrapped = env.unwrapped
# Rotate each face and make sure goal generator is able to solve the cube in one step
unwrapped.mujoco_simulation.cube_model.rotate_face(
axis, side, np.pi / 2 * rot_direction
)
unwrapped.reset_goal_generation()
_, _, goal_info = env.unwrapped.goal_info()
assert goal_info["goal_reachable"]
assert goal_info["goal_dist"]["steps_to_solve"] == 1
goal = goal_info["goal"]
assert goal["goal_type"] == "rotation"
assert_allclose(goal["cube_face_angle"], np.zeros(6))
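The tests above repeatedly track remaining face rotations through the index `axis * 2 + side` into a six-entry angle array. A minimal pure-Python sketch of that bookkeeping (no MuJoCo or env needed), reproducing the `[0, pi/2, 0, pi/2, 0, pi/2]` state the tests assert after the B/R/U scramble:

```python
import math

# Same encoding the tests use: a six-entry face-angle array indexed by
# axis * 2 + side, i.e. (x-, x+, y-, y+, z-, z+).
scrambles = [
    {"axis": 2, "side": 1, "angle": math.pi / 2},  # z+, CW
    {"axis": 0, "side": 1, "angle": math.pi / 2},  # x+, CW
    {"axis": 1, "side": 1, "angle": math.pi / 2},  # y+, CW
]

face_angles = [0.0] * 6
for rot in scrambles:
    face_angles[rot["axis"] * 2 + rot["side"]] += rot["angle"]
```

Undoing a scramble step is then just subtracting the same angle at the same index, which is exactly what the solver loops assert.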
# tests/test_conv_blocks.py (repo: gle-bellier/expressive-perf, MIT license)
import pytest
import torch
from perf_gan.models.blocks.conv_blocks import ConvBlock
from perf_gan.models.perf_gan.u_block import UBlock
from perf_gan.models.perf_gan.d_block import DBlock
def test_dims_conv1d():
batch_size = 10
in_channels = 32
length = 100
out_channels = 64
in_c = torch.randn((
batch_size,
in_channels,
length,
))
assert ConvBlock(in_channels, out_channels,
dilation=3)(in_c).shape == (batch_size, out_channels,
length)
def test_upsampling_block():
batch_size = 10
in_channels = 32
length = 100
out_channels = 64
dilation = 3
x = torch.randn((
batch_size,
in_channels,
length,
))
ublock = UBlock(in_channels, out_channels, dilation=dilation)
assert ublock(x).shape == (batch_size, out_channels, length * 2)
def test_last_upsampling_block():
batch_size = 10
in_channels = 32
length = 100
out_channels = 64
dilation = 3
x = torch.randn((
batch_size,
in_channels,
length,
))
ublock = UBlock(in_channels, out_channels, dilation=dilation, last=True)
assert ublock(x).shape == (batch_size, out_channels, length)
def test_downsampling_block():
batch_size = 10
in_channels = 32
length = 100
out_channels = 64
dilation = 3
x = torch.randn((
batch_size,
in_channels,
length,
))
ublock = DBlock(in_channels, out_channels, dilation=dilation)
assert ublock(x).shape == (batch_size, out_channels, length // 2)
def test_first_downsampling_block():
batch_size = 10
in_channels = 32
length = 100
out_channels = 64
dilation = 3
x = torch.randn((
batch_size,
in_channels,
length,
))
ublock = DBlock(in_channels, out_channels, dilation=dilation, first=True)
    assert ublock(x).shape == (batch_size, out_channels, length)
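The shape assertions above boil down to simple length arithmetic: a regular UBlock doubles the sequence length and a regular DBlock halves it, while the `last=True` / `first=True` variants leave it unchanged. A pure-Python sketch of that rule (mirroring what the tests assert, not the real layers):

```python
def ublock_out_len(length, last=False):
    # Upsampling block: doubles the length unless it is the last block.
    return length if last else length * 2

def dblock_out_len(length, first=False):
    # Downsampling block: halves the length unless it is the first block.
    return length if first else length // 2
```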
# robi/robi24M.py (repo: uannabi/Analysis, Apache-2.0 license)
# pkg_list=com.databricks:spark-avro_2.11:4.0.0,org.apache.hadoop:hadoop-aws:2.7.1
# pyspark --packages $pkg_list --driver-memory 30G --driver-cores 5 --num-executors 20 --executor-memory 30G --executor-cores 5 --conf spark.driver.maxResultSize=0 --conf spark.yarn.maxAppAttempts=1 --conf spark.ui.port=10045
#
from pyspark import SparkContext, SparkConf, HiveContext
from pyspark.sql.functions import *
import pyspark.sql.functions as F
import sys

# `spark` (the SparkSession) used below is provided by the pyspark shell started above.
country='BD'
time='monthly'
months =[ '202001',
'202002',
'202003',
'202004',
'202005',
'202006',
'202007',
'202008',
'202009',
'202010',
'202011',
'202012',
'202101',
'202102',
'202103',
'202104',
'202105',
'202106',
'202107',
'202108',
'202109',
'202110',
'202111',
'202112',
]
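The 24 hard-coded YYYYMM keys above could also be generated; a small helper (hypothetical, not part of the original script) that yields the same list:

```python
def month_range(start, end):
    """Yield 'YYYYMM' strings from start to end, inclusive."""
    y, m = int(start[:4]), int(start[4:])
    end_y, end_m = int(end[:4]), int(end[4:])
    while (y, m) <= (end_y, end_m):
        yield '{}{:02d}'.format(y, m)
        m += 1
        if m > 12:
            y, m = y + 1, 1

generated_months = list(month_range('202001', '202112'))
```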
Month2=['202108',
'202109',
'202110',
'202111',
'202112',
]
path = '/result/2022/Robi/DSR-623/raw/'
for m in months:
df=spark.read.parquet('etl/data/brq/sub/connection/'+time+'/'+country+'/'+m+'/*.parquet')
connection = df.select('ifa', F.explode('req_connection.req_carrier_name').alias('telco'))
robi_user = connection.filter(connection['telco'] == 'Robi/Aktel').distinct()
print('Robi User {}'.format(robi_user.count()))
robi_user.write.parquet(path+"robi/"+m)
print('Robi written on Done! for {}'.format(m))
gp_user = connection.filter(connection['telco'] == 'GrameenPhone').distinct()
print('GrameenPhone User {}'.format(gp_user.count()))
gp_user.write.parquet(path+"gp/"+m)
print('GP written on Done! for {}'.format(m))
bl_user = connection.filter(connection['telco'] == 'Orascom/Banglalink').distinct()
print('Banglalink User {}'.format(bl_user.count()))
bl_user.write.parquet(path + "bl/" + m)
print('BL written on Done! for {}'.format(m))
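Each pass of the loop above is a filter on `telco` plus a distinct-`ifa` count. The same bookkeeping can be sketched in plain Python with a dict of sets (the rows here are made up; in the job they come from the exploded `connection` frame):

```python
from collections import defaultdict

rows = [
    ('ifa-1', 'Robi/Aktel'),
    ('ifa-1', 'Robi/Aktel'),         # duplicate row -> counted once
    ('ifa-2', 'GrameenPhone'),
    ('ifa-3', 'Orascom/Banglalink'),
    ('ifa-4', 'Robi/Aktel'),
]

users_by_carrier = defaultdict(set)
for ifa, carrier in rows:
    users_by_carrier[carrier].add(ifa)   # set membership plays the role of distinct()

carrier_counts = {c: len(s) for c, s in users_by_carrier.items()}
```

A single groupBy over `telco` would give all three carrier counts in one Spark pass, at the cost of still needing per-carrier writes.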
for m in Month2:
print(m)
geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+m+'/*.parquet')
df_gender = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/gender/*.parquet')
df_gender = df_gender.withColumnRenamed('prediction', 'gender')
df_gender_segment = geo.join(df_gender, ['ifa'], how='left')
df_gender_segment.groupBy('gender').agg(F.countDistinct('ifa')).orderBy('gender').show(200,0)
for m in Month2:
print(m)
geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+m+'/*.parquet')
df_gender = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/gender/*.parquet')
df_gender = df_gender.withColumnRenamed('prediction', 'gender')
df_gender_segment = geo.join(df_gender, ['ifa'], how='left')
df_gender_segment.groupBy('gender').agg(F.countDistinct('ifa')).orderBy('gender').show(200,0)
for m in Month2:
print(m)
geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+m+'/*.parquet')
df_gender = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/gender/*.parquet')
df_gender = df_gender.withColumnRenamed('prediction', 'gender')
df_gender_segment = geo.join(df_gender, ['ifa'], how='left')
df_gender_segment.groupBy('gender').agg(F.countDistinct('ifa')).orderBy('gender').show(200,0)
for i in months:
    print(i)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+i+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+i+'/age/*.parquet')
    df_age = df_age.drop('prediction')
    df_age = df_age.withColumnRenamed('label', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200,0)

for i in months:
    print(i)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+i+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+i+'/age/*.parquet')
    df_age = df_age.drop('prediction')
    df_age = df_age.withColumnRenamed('label', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200,0)

for i in months:
    print(i)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+i+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+i+'/age/*.parquet')
    df_age = df_age.drop('prediction')
    df_age = df_age.withColumnRenamed('label', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200,0)
for m in Month2:  # loop variable fixed: the body uses m, not i
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+m+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/age/*.parquet')
    df_age = df_age.withColumnRenamed('prediction', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200, 0)
for m in Month2:  # loop variable fixed: the body uses m, not i
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+m+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/age/*.parquet')
    df_age = df_age.withColumnRenamed('prediction', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200, 0)
for m in Month2:  # loop variable fixed: the body uses m, not i
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+m+'/*.parquet')
    df_age = spark.read.parquet('etl/table/brq/sub/demographics/monthly/'+country+'/'+m+'/age/*.parquet')
    df_age = df_age.withColumnRenamed('prediction', 'age')
    df_age_segment = geo.join(df_age, ['ifa'], how='left')
    df_age_segment.groupBy('age').agg(F.countDistinct('ifa')).orderBy('age').show(200, 0)
geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/202107/*.parquet')
device = spark.read.parquet('etl/data/brq/sub/device/monthly/'+country+'/202107/*.parquet')
for m in months:
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+m+'/*.parquet')
    device = spark.read.parquet('etl/data/brq/sub/device/monthly/'+country+'/'+m+'/*.parquet')
    device = device.select('ifa', 'device.pricegrade')
    dev = geo.join(device, on='ifa')
    aff_co = dev.withColumn('affluence', F.lit(None))
    new = aff_co.withColumn('affluence', F.when(aff_co.pricegrade == 1, 'high').otherwise(aff_co.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 2, 'mid').otherwise(new.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 3, 'low').otherwise(new.affluence))
    final = new.na.fill('Unknown', 'affluence')
    final = final.groupBy('affluence').agg(F.countDistinct('ifa').alias('count')).sort('count', ascending=False)
    final.show(100, 0)
for m in months:
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+m+'/*.parquet')
    device = spark.read.parquet('etl/data/brq/sub/device/monthly/'+country+'/'+m+'/*.parquet')
    device = device.select('ifa', 'device.pricegrade')
    dev = geo.join(device, on='ifa')
    aff_co = dev.withColumn('affluence', F.lit(None))
    new = aff_co.withColumn('affluence', F.when(aff_co.pricegrade == 1, 'high').otherwise(aff_co.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 2, 'mid').otherwise(new.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 3, 'low').otherwise(new.affluence))
    final = new.na.fill('Unknown', 'affluence')
    final = final.groupBy('affluence').agg(F.countDistinct('ifa').alias('count')).sort('count', ascending=False)
    final.show(100, 0)
for m in months:
    print(m)
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+m+'/*.parquet')
    device = spark.read.parquet('etl/data/brq/sub/device/monthly/'+country+'/'+m+'/*.parquet')
    device = device.select('ifa', 'device.pricegrade')
    dev = geo.join(device, on='ifa')
    aff_co = dev.withColumn('affluence', F.lit(None))
    new = aff_co.withColumn('affluence', F.when(aff_co.pricegrade == 1, 'high').otherwise(aff_co.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 2, 'mid').otherwise(new.affluence))
    new = new.withColumn('affluence', F.when(aff_co.pricegrade == 3, 'low').otherwise(new.affluence))
    final = new.na.fill('Unknown', 'affluence')
    final = final.groupBy('affluence').agg(F.countDistinct('ifa').alias('count')).sort('count', ascending=False)
    final.show(100, 0)
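# The repeated withColumn/F.when chain above is just a three-value lookup with
# an 'Unknown' fallback. A plain-Python sketch of the same mapping (the helper
# name is illustrative, not from the source):

```python
def affluence_label(pricegrade):
    """Map a device pricegrade (1/2/3) to an affluence label.

    Mirrors the F.when(...).otherwise(...) chain used in the Spark loops
    above; anything outside 1-3 falls back to 'Unknown'.
    """
    mapping = {1: 'high', 2: 'mid', 3: 'low'}
    return mapping.get(pricegrade, 'Unknown')
```

# The same table could also be expressed in Spark with a single chained
# F.when(...).when(...).otherwise('Unknown'), avoiding the intermediate
# null column entirely.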
master_df = spark.read.csv('reference/app/master_all/all/all/all/app.csv', header=True)
level_df = spark.read.csv('reference/app/app_level/all/all/all/app_level.csv', header=True)
lifestage_df = spark.read.csv('reference/app/lifestage/all/all/all/app_lifestage.csv', header=True)
join_df1 = master_df.join(level_df, on='app_level_id', how='left').cache()
join_df2 = join_df1.join(lifestage_df, on='app_lifestage_id', how='left').cache()
select_columns = ['bundle','app_l1_name','app_l2_name','app_l3_name','lifestage_name']
finalapp_df = join_df2.select(*select_columns)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    app = brq2.join(finalapp_df, on='bundle', how='left').cache()
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+m+'/*.parquet')
    persona_app = geo.join(app, on='ifa')
    freq_beh = persona_app.groupBy('app_l1_name').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    freq_beh1 = freq_beh.filter(freq_beh['app_l1_name'] != 'null')
    freq_beh1.show(20, 0)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    app = brq2.join(finalapp_df, on='bundle', how='left').cache()
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+m+'/*.parquet')
    persona_app = geo.join(app, on='ifa')
    freq_beh = persona_app.groupBy('app_l1_name').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    freq_beh1 = freq_beh.filter(freq_beh['app_l1_name'] != 'null')
    freq_beh1.show(20, 0)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    app = brq2.join(finalapp_df, on='bundle', how='left').cache()
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+m+'/*.parquet')
    persona_app = geo.join(app, on='ifa')
    freq_beh = persona_app.groupBy('app_l1_name').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    freq_beh1 = freq_beh.filter(freq_beh['app_l1_name'] != 'null')
    freq_beh1.show(20, 0)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/' + m + '/*.parquet')
    # join the freshly exploded rows; the original joined `app`, a stale
    # dataframe left over from the previous cell, leaving brq2 unused
    persona_app = geo.join(brq2, on='ifa')
    top_app = persona_app.groupBy('asn').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    top_app = top_app.filter(top_app['asn'] != 'null')
    top_app.show(20, 0)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/' + m + '/*.parquet')
    # join the freshly exploded rows; the original joined the stale `app`
    persona_app = geo.join(brq2, on='ifa')
    top_app = persona_app.groupBy('asn').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    top_app = top_app.filter(top_app['asn'] != 'null')
    top_app.show(20, 0)
for m in months:
    print(m)
    brq = spark.read.parquet('etl/data/brq/agg/agg_brq/monthly/'+country+'/'+m+'/*.parquet')
    brq2 = brq.select('ifa', explode('app')).select('ifa', 'col.*')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/' + m + '/*.parquet')
    # join the freshly exploded rows; the original joined the stale `app`
    persona_app = geo.join(brq2, on='ifa')
    top_app = persona_app.groupBy('asn').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False)
    top_app = top_app.filter(top_app['asn'] != 'null')
    top_app.show(20, 0)
for m in months:
    print(m)
    df = spark.read.parquet('etl/data/brq/sub/connection/monthly/'+country+'/'+m+'/*.parquet')
    df2 = df.select('ifa', explode('mm_connection')).select('ifa', 'col.*')
    df3 = df2.select('ifa', 'mm_con_type_desc', 'mm_carrier_name')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/robi/'+m+'/*.parquet')
    df4 = geo.join(df3, on='ifa', how='left')
    dual_simmers = df4.filter(col('mm_con_type_desc') == 'Cellular').select('ifa', 'mm_carrier_name').distinct()
    dual_simmers = dual_simmers.groupBy('ifa').agg(countDistinct('mm_carrier_name').alias('sims'))
    dual_simmers = dual_simmers.filter(col('sims') > 1).withColumn('dual_sim', F.lit(1))
    dual_simmers.groupBy('dual_sim', 'sims').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False).show(10, False)
for m in months:
    print(m)
    df = spark.read.parquet('etl/data/brq/sub/connection/monthly/'+country+'/'+m+'/*.parquet')
    df2 = df.select('ifa', explode('mm_connection')).select('ifa', 'col.*')
    df3 = df2.select('ifa', 'mm_con_type_desc', 'mm_carrier_name')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/gp/'+m+'/*.parquet')
    df4 = geo.join(df3, on='ifa', how='left')
    dual_simmers = df4.filter(col('mm_con_type_desc') == 'Cellular').select('ifa', 'mm_carrier_name').distinct()
    dual_simmers = dual_simmers.groupBy('ifa').agg(countDistinct('mm_carrier_name').alias('sims'))
    dual_simmers = dual_simmers.filter(col('sims') > 1).withColumn('dual_sim', F.lit(1))
    dual_simmers.groupBy('dual_sim', 'sims').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False).show(10, False)
for m in months:
    print(m)
    df = spark.read.parquet('etl/data/brq/sub/connection/monthly/'+country+'/'+m+'/*.parquet')
    df2 = df.select('ifa', explode('mm_connection')).select('ifa', 'col.*')
    df3 = df2.select('ifa', 'mm_con_type_desc', 'mm_carrier_name')
    geo = spark.read.parquet('/result/2022/Robi/DSR-623/raw/bl/'+m+'/*.parquet')
    df4 = geo.join(df3, on='ifa', how='left')
    dual_simmers = df4.filter(col('mm_con_type_desc') == 'Cellular').select('ifa', 'mm_carrier_name').distinct()
    dual_simmers = dual_simmers.groupBy('ifa').agg(countDistinct('mm_carrier_name').alias('sims'))
    dual_simmers = dual_simmers.filter(col('sims') > 1).withColumn('dual_sim', F.lit(1))
    dual_simmers.groupBy('dual_sim', 'sims').agg(F.countDistinct('ifa').alias('freq')).sort('freq', ascending=False).show(10, False)
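# The blocks above differ only in the operator segment ('robi', 'gp', 'bl')
# baked into the geo parquet path. A small path-builder helper would remove
# that duplication; a sketch under the path layout used above (the function
# name is an assumption, not from the source):

```python
def raw_geo_path(operator, month):
    """Build the geo parquet glob for one operator and month.

    operator: one of 'robi', 'gp', 'bl'; month: e.g. '202107'.
    """
    return '/result/2022/Robi/DSR-623/raw/{}/{}/*.parquet'.format(operator, month)
```

# Each loop body could then call spark.read.parquet(raw_geo_path('gp', m)),
# and a single outer loop over ('robi', 'gp', 'bl') would replace the three
# near-identical copies of every analysis block.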
"""Tests for views for public pages."""
import unittest
from unittest.mock import patch
from unittest.mock import MagicMock
from jinja2 import TemplateNotFound
from pagapp.public_pages.views import index, album, login
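# The tests below repeatedly patch collaborators and drive error paths via
# side_effect. A minimal, self-contained illustration of that pattern
# (hypothetical names, not part of this suite):

```python
from unittest.mock import MagicMock


def render_or_fallback(renderer):
    # Call the (possibly mocked) renderer; fall back if it raises.
    try:
        return renderer()
    except Exception:
        return 'error page'


# A mock whose call raises, standing in for render_template()
# raising TemplateNotFound in the tests below.
mock_renderer = MagicMock(side_effect=RuntimeError('template missing'))
result = render_or_fallback(mock_renderer)
```

# After the call, `result` is the fallback value and `mock_renderer.called`
# is True — the same two facts the assertions in these tests check.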
class IndexTestCase(unittest.TestCase):
"""Tests for index() function."""
@patch('pagapp.public_pages.views.render_template')
def test_index(self, mock_render_template):
"""Test for index() function.
Test case:
The function should return the result of the render_template() call
when render_template() does not raise an exception.
"""
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_albums = 'pagapp.public_pages.views.Albums'
path_first_run = 'pagapp.public_pages.views.is_first_run'
path_is_upgrade_ready = 'pagapp.public_pages.views.is_upgrade_ready'
with patch(path_to_configuration) as mock_configuration, \
patch(path_to_albums) as mock_albums, \
patch(path_first_run) as mock_first_run, \
patch(path_is_upgrade_ready) as mock_is_upgrade_ready:
mock_first_result = MagicMock()
mock_first_result.gallery_title = 'test'
mock_configuration.query.first.return_value = mock_first_result
test_render_template = 'render_template'
mock_render_template.return_value = test_render_template
mock_albums.get_albums_list.return_value = 'test'
mock_first_run.return_value = False
mock_is_upgrade_ready.return_value = False
self.assertEqual(index(), test_render_template,
msg="render_template() should be called!")
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.abort')
@patch('pagapp.public_pages.views.render_template')
def test_index_no_template(self, mock_render_template, mock_abort,
mock_app):
"""Test for index() function.
Test case:
If render_template() raises TemplateNotFound, the function
should call abort(404).
"""
test_abort = 'test_abort'
mock_abort.return_value = test_abort
mock_render_template.side_effect = TemplateNotFound(name='test')
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_albums = 'pagapp.public_pages.views.Albums'
path_to_first_run = 'pagapp.public_pages.views.is_first_run'
path_is_upgrade_ready = 'pagapp.public_pages.views.is_upgrade_ready'
with patch(path_to_albums) as mock_albums, \
patch(path_to_configuration) as mock_configuration, \
patch(path_to_first_run) as mock_first_run, \
patch(path_is_upgrade_ready) as mock_is_upgrade_ready:
mock_first_result = MagicMock()
mock_first_result.gallery_title = 'test'
mock_configuration.query.first.return_value = mock_first_result
mock_albums.get_albums_list.return_value = 'test'
mock_first_run.return_value = False
mock_is_upgrade_ready.return_value = False
index()
self.assertTrue(mock_abort.called,
msg="abort(404) should be called!")
del mock_app
def test_index_first_run(self):
"""Test for index() function.
Test case:
The function should redirect to the corresponding service page
if this is the application's first run.
"""
path_to_first_run = 'pagapp.public_pages.views.is_first_run'
path_to_app = 'pagapp.public_pages.views.current_app'
path_to_url_for = 'pagapp.public_pages.views.url_for'
path_to_redirect = 'pagapp.public_pages.views.redirect'
with patch(path_to_first_run) as mock_first_run, \
patch(path_to_app) as mock_app, \
patch(path_to_url_for) as mock_url_for, \
patch(path_to_redirect) as mock_redirect:
mock_first_run.return_value = True
index()
self.assertTrue(mock_app.logger.info.called)
self.assertTrue(mock_url_for.called)
self.assertTrue(mock_redirect.called)
@patch('pagapp.public_pages.views.Configuration')
@patch('pagapp.public_pages.views.Albums')
def test_index_update_db(self, mock_configuration, mock_albums):
"""Test for index() function.
Test case:
The function should perform a database upgrade when one is ready.
"""
path_to_first_run = 'pagapp.public_pages.views.is_first_run'
path_to_upgrade_database = 'pagapp.public_pages.views.upgrade_database'
path_to_render_template = 'pagapp.public_pages.views.render_template'
path_to_flash = 'pagapp.public_pages.views.flash'
path_to_is_upgrade_ready = 'pagapp.public_pages.views.is_upgrade_ready'
with patch(path_to_first_run) as mock_first_run, \
patch(path_to_upgrade_database) as mock_upgrade_database, \
patch(path_to_render_template) as mock_render_template, \
patch(path_to_flash) as mock_flash, \
patch(path_to_is_upgrade_ready) as mock_is_upgrade_ready:
mock_first_run.return_value = False
mock_is_upgrade_ready.return_value = True
index()
self.assertTrue(mock_upgrade_database.called)
self.assertTrue(mock_flash.called)
self.assertTrue(mock_render_template.called)
del mock_configuration
del mock_albums
class AlbumTestCase(unittest.TestCase):
"""Tests for album() function."""
@patch('pagapp.public_pages.views.render_template')
def test_album(self, mock_render_template):
"""Test for album() function.
Test case:
If an album with the given album_url exists, the function should
return the render_template() result.
"""
test_album_url = 'album_url'
test_filter_by = 'test_filter_by'
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = test_filter_by  # was 'result_value', a typo for the Mock attribute
path_to_pictures = 'pagapp.public_pages.views.Pictures'
path_to_albums = 'pagapp.public_pages.views.Albums'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
with patch(path_to_pictures) as mock_pictures, \
patch(path_to_albums) as mock_albums, \
patch(path_to_configuration) as mock_configuration:
mock_first_result = MagicMock()
mock_first_result.gallery_title = 'test'
mock_configuration.query.first.return_value = mock_first_result
mock_albums.query.filter_by.return_value = mock_filter_by_result
mock_pictures.query.filter_by.return_value = test_filter_by
test_render_template_result = 'render_template_result'
mock_render_template.return_value = test_render_template_result
self.assertEqual(album(test_album_url), test_render_template_result,
msg="render_template() should be called!")
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.abort')
@patch('pagapp.public_pages.views.render_template')
def test_album_no_template(self, mock_render_template, mock_abort,
mock_app):
"""Test for album() function.
Test case:
If the album exists but the HTML template does not, the function
should call abort(404).
"""
test_album_url = 'album_url'
test_filter_by = 'test_filter_by'
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = test_filter_by  # was 'result_value', a typo for the Mock attribute
path_to_albums = 'pagapp.public_pages.views.Albums'
path_to_pictures = 'pagapp.public_pages.views.Pictures'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
with patch(path_to_albums) as mock_albums, \
patch(path_to_pictures) as mock_pictures, \
patch(path_to_configuration) as mock_configuration:
mock_albums.query.filter_by.return_value = mock_filter_by_result
mock_pictures.query.filter_by.return_value = test_filter_by
mock_first_result = MagicMock()
mock_first_result.gallery_title = 'test'
mock_configuration.query.first.return_value = mock_first_result
mock_render_template.side_effect = TemplateNotFound(name='test')
album(test_album_url)
self.assertTrue(mock_abort.called, msg="abort() not called!")
del mock_app
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.url_for')
@patch('pagapp.public_pages.views.redirect')
@patch('pagapp.public_pages.views.flash')
def test_album_not_exists(self, mock_flash, mock_redirect, mock_url_for,
mock_app):
"""Test for album() function.
Test case:
If no album with the given album_url exists, the function should
catch the AttributeError exception and call the flash() and
redirect() functions.
"""
test_album_url = 'album_url'
test_filter_by = 'test_filter_by'
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = test_filter_by  # was 'result_value', a typo for the Mock attribute
path_to_albums = 'pagapp.public_pages.views.Albums'
path_to_pictures = 'pagapp.public_pages.views.Pictures'
with patch(path_to_albums) as mock_albums, \
patch(path_to_pictures) as mock_pictures:
mock_albums.query.filter_by.return_value = mock_filter_by_result
mock_pictures.query.filter_by.return_value = test_filter_by
mock_albums.query.filter_by.side_effect = AttributeError()
redirect_result = 'test redirect'
url_for_result = 'test url for'
mock_redirect.return_value = redirect_result
mock_url_for.return_value = url_for_result
self.assertEqual(album(test_album_url), redirect_result,
msg="redirect() should be called!")
self.assertTrue(mock_flash.called, msg="flash() does not called!")
del mock_app
class LoginTestCase(unittest.TestCase):
"""Tests for login() function."""
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.redirect')
@patch('pagapp.public_pages.views.url_for')
@patch('pagapp.public_pages.views.login_user')
@patch('pagapp.public_pages.views.request')
def test_login(self, mock_request, mock_login_user, mock_url_for,
mock_redirect, mock_app):
"""Test for login() function.
Test case:
If request.method is 'POST' and the login form validates
successfully, login_user() should be called and the function
should return the result of the redirect() call.
"""
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
path_to_users = 'pagapp.public_pages.views.Users'
path_to_current_user = 'pagapp.public_pages.views.current_user'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
with patch(path_to_users) as mock_users, \
patch(path_to_current_user) as mock_user, \
patch(path_to_login_form) as mock_login_form:
mock_request.method = 'POST'
mock_request.args.get.return_value = None
mock_login_form.return_value.validate.return_value = True
mock_login_form.return_value.login.data = 'test'
mock_user.is_authenticated.return_value = False
mock_users.query.filter_by.return_value = mock_filter_by_result
redirect_result = 'test redirect'
mock_redirect.return_value = redirect_result
self.assertEqual(login(), redirect_result,
msg="redirect() should be called")
self.assertTrue(mock_login_user.called,
msg="login_user() should be called!")
self.assertTrue(mock_url_for.called,
msg="url_for() should be called!")
del mock_app
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.render_template')
@patch('pagapp.public_pages.views.flash_form_errors')
@patch('pagapp.public_pages.views.request')
def test_login_not_post(self, mock_request, mock_flash_form_errors,
mock_render_template, mock_app):
"""Test for login() function.
Test case:
If request.method is not 'POST', flash_form_errors() should be
called, and the function should return the render_template() call result.
"""
path_to_users = 'pagapp.public_pages.views.Users'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
path_to_current_user = 'pagapp.public_pages.views.current_user'
with patch(path_to_users) as mock_users, \
patch(path_to_configuration) as mock_configuration, \
patch(path_to_login_form) as mock_login_form, \
patch(path_to_current_user) as mock_user:
mock_filter_by_result = MagicMock()
mock_first_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
mock_first_result.gallery_title = 'test'
mock_users.query.filter_by.return_value = mock_filter_by_result
mock_configuration.query.first.return_value = mock_first_result
mock_user.is_authenticated.return_value = False
mock_request.method = 'GET'
mock_login_form.return_value.validate.return_value = True
mock_login_form.return_value.login.data = 'test'
render_template_result = 'render_template result'
mock_render_template.return_value = render_template_result
self.assertEqual(login(), render_template_result,
msg="render_template() should be called")
self.assertTrue(mock_flash_form_errors.called)
del mock_app
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.render_template')
@patch('pagapp.public_pages.views.flash_form_errors')
@patch('pagapp.public_pages.views.request')
def test_login_not_validate(self, mock_request, mock_flash_form_errors,
mock_render_template, mock_app):
"""Test for login() function.
Test case:
If the login form fails to validate, flash_form_errors() should be
called, and the function should return the render_template() call result.
"""
path_to_users = 'pagapp.public_pages.views.Users'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_current_user = 'pagapp.public_pages.views.current_user'
with patch(path_to_users) as mock_users, \
patch(path_to_login_form) as mock_login_form, \
patch(path_to_configuration) as mock_configuration, \
patch(path_to_current_user) as mock_user:
mock_first_result = MagicMock()
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
mock_first_result.gallery_title = 'test'
mock_users.query.filter_by.return_value = mock_filter_by_result
mock_configuration.query.first.return_value = mock_first_result
mock_user.is_authenticated.return_value = False
mock_request.method = 'POST'
mock_login_form.return_value.validate.return_value = False
mock_login_form.return_value.login.data = 'test'
render_template_result = 'render_template result'
mock_render_template.return_value = render_template_result
self.assertEqual(login(), render_template_result,
msg="render_template() should be called")
self.assertTrue(mock_flash_form_errors.called)
del mock_app
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.render_template')
@patch('pagapp.public_pages.views.flash_form_errors')
@patch('pagapp.public_pages.views.request')
def test_login_not_post_validate(self, mock_request, mock_flash_form_errors,
mock_render_template, mock_app):
"""Test for login() function.
Test case:
If request.method is not 'POST' and the login form fails to
validate, flash_form_errors() should be called, and the function
should return the render_template() call result.
"""
path_to_users = 'pagapp.public_pages.views.Users'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
path_to_current_user = 'pagapp.public_pages.views.current_user'
with patch(path_to_users) as mock_users, \
patch(path_to_configuration) as mock_configuration, \
patch(path_to_login_form) as mock_login_form, \
patch(path_to_current_user) as mock_user:
mock_filter_by_result = MagicMock()
mock_first_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
mock_first_result.gallery_title = 'test'
mock_users.query.filter_by.return_value = mock_filter_by_result
mock_configuration.query.first.return_value = mock_first_result
mock_user.is_authenticated.return_value = False
mock_request.method = 'GET'
mock_login_form.return_value.validate.return_value = False
mock_login_form.return_value.login.data = 'test'
render_template_result = 'render_template result'
mock_render_template.return_value = render_template_result
self.assertEqual(login(), render_template_result,
msg="render_template() should be called")
self.assertTrue(mock_flash_form_errors.called)
del mock_app
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.abort')
@patch('pagapp.public_pages.views.render_template')
@patch('pagapp.public_pages.views.flash_form_errors')
@patch('pagapp.public_pages.views.request')
def test_login_no_template(self, mock_request, mock_flash_form_errors,
mock_render_template, mock_abort, mock_app):
"""Test for login() function.
Test case:
If request.method is not 'POST', flash_form_errors() should be
called. Also, if the HTML template does not exist, abort(404)
should be called.
"""
path_to_users = 'pagapp.public_pages.views.Users'
path_to_configuration = 'pagapp.public_pages.views.Configuration'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
path_to_current_user = 'pagapp.public_pages.views.current_user'
with patch(path_to_users) as mock_users, \
patch(path_to_login_form) as mock_login_form, \
patch(path_to_configuration) as mock_configuration, \
patch(path_to_current_user) as mock_user:
mock_filter_by_result = MagicMock()
mock_first_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
mock_first_result.gallery_title = 'test'
mock_users.query.filter_by.return_value = mock_filter_by_result
mock_configuration.query.first.return_value = mock_first_result
mock_user.is_authenticated.return_value = False
mock_request.method = 'GET'
mock_login_form.return_value.validate.return_value = False
mock_login_form.return_value.login.data = 'test'
mock_render_template.side_effect = TemplateNotFound(name='test')
login()
self.assertTrue(mock_abort.called, msg="abort() should be called!")
self.assertTrue(mock_flash_form_errors.called)
del mock_app
@patch('pagapp.public_pages.views.redirect')
@patch('pagapp.public_pages.views.url_for')
@patch('pagapp.public_pages.views.current_user')
def test_login_user_already_authenticated(self, mock_user, mock_url_for,
mock_redirect):
"""Test for login() function.
Test case:
If the user is already logged in, they should be redirected to
the admin panel.
"""
mock_user.is_authenticated.return_value = True
login()
self.assertTrue(mock_url_for.called)
self.assertTrue(mock_redirect.called)
mock_url_for.assert_called_with('admin_panel.panel')
@patch('pagapp.public_pages.views.login_user')
@patch('pagapp.public_pages.views.current_app')
@patch('pagapp.public_pages.views.request')
def test_login_open_request(self, mock_request, mock_app, mock_login_user):
"""Test for login() function.
Test case:
If extra arguments are passed in the request, the function
should call abort() immediately.
"""
mock_filter_by_result = MagicMock()
mock_filter_by_result.first.return_value = 'test'
path_to_users = 'pagapp.public_pages.views.Users'
path_to_current_user = 'pagapp.public_pages.views.current_user'
path_to_login_form = 'pagapp.public_pages.views.LoginForm'
path_to_abort = 'pagapp.public_pages.views.abort'
with patch(path_to_current_user) as mock_user, \
patch(path_to_login_form) as mock_login_form, \
patch(path_to_users) as mock_users, \
patch(path_to_abort) as mock_abort:
mock_request.method = 'POST'
mock_login_form.return_value.validate.return_value = True
mock_login_form.return_value.login.data = 'test'
mock_user.is_authenticated.return_value = False
mock_users.query.filter_by.return_value = mock_filter_by_result
mock_request.args.get.return_value = 'test'
login()
self.assertTrue(mock_abort.called)
mock_abort.assert_called_with(400)
del mock_app
del mock_login_user
if __name__ == '__main__':
    unittest.main()
# coding=utf8
from tempfile import TemporaryFile, gettempdir
import os as _os
from xlwt import Workbook, XFStyle, Alignment, Font, Borders, Pattern
from datetime import datetime, date, time, timedelta
import calendar
class RiskToExcel(object):
def __init__(self):
pass;
def exportReport1ToExcel(self,objectProject):
book = Workbook();
sheet1 = book.add_sheet('Sheet 1');
sheet1.col(1).width = 256*20;
sheet1.col(2).width = 256*80;
sheet1.col(3).width = 256*10;
sheet1.col(4).width = 256*10;
sheet1.col(5).width = 256*20;
sheet1.col(6).width = 256*20;
borders = Borders()
borders.left = Borders.THIN
borders.right = Borders.THIN
borders.top = Borders.THIN
borders.bottom = Borders.THIN
pattern = Pattern();
pattern.pattern = Pattern.SOLID_PATTERN
pattern.pattern_fore_colour = 23
wrap = Alignment();
wrap.wrap = 1;
wrap.vert = Alignment.VERT_TOP
alignHeader = Alignment();
alignHeader.horz = Alignment.HORZ_CENTER;
alignTop = Alignment();
alignTop.vert = Alignment.VERT_TOP
fnt = Font()
fnt.name = 'Arial'
fnt.colour_index = 4
fnt.bold = True
styleWrap = XFStyle();
styleWrap.alignment = wrap;
styleHead = XFStyle();
styleHead.font = fnt;
styleHead.borders = borders;
styleHead.pattern = pattern;
styleHead.alignment = alignHeader;
styleRowDetail = XFStyle();
styleRowDetail.borders = borders;
styleRowDetail.alignment = alignTop;
styleDate = XFStyle()
styleDate.num_format_str = 'DD-MM-YYYY' ; #'D-MMM-YY';
styleDate.borders = borders;
styleDate.alignment = alignTop;
StyleRowDetailWrap = XFStyle();  # a separate style: aliasing styleRowDetail here would give it wrap alignment too
StyleRowDetailWrap.borders = borders;
StyleRowDetailWrap.alignment = wrap;
if( objectProject):
i=0;
row1 = sheet1.row(i) ;
row1.write(0, ('ลำดับที่').decode('UTF8') ,styleHead);
#sheet1.write_merge(i, i, 1, 2, ('รายละเอียด').decode('UTF8') );
row1.write(1, ('เลขความเสี่ยง').decode('UTF8'),styleHead );
row1.write(2, ('อุบัติการณ์/ภาวะไม่พึงประสงค์').decode('UTF8'),styleHead);
row1.write(3, ('วันที่รายงาน').decode('UTF8'),styleHead );
row1.write(4, ('ความรุนแรง').decode('UTF8'),styleHead );
row1.write(5, ('ด้าน/โปรแกรม').decode('UTF8') ,styleHead);
row1.write(6, ('หน่วยที่รายงาน').decode('UTF8') ,styleHead);
i=i+1;
for value in objectProject:
row1 = sheet1.row(i) ;
row1.write(0, value.get('row') ,styleRowDetail );
row1.write(1, str(value.get('risk_id')).decode('UTF8'),styleRowDetail );
row1.write(2, value.get('detail').decode('UTF8'),StyleRowDetailWrap );
row1.write(3, value.get('report_date') ,styleDate );
row1.write(4, value.get('level').decode('UTF8') ,styleRowDetail );
row1.write(5, value.get('pro').decode('UTF8') ,styleRowDetail );
row1.write(6, value.get('reporter').decode('UTF8') ,styleRowDetail );
i=i+1;
row2 = sheet1.row(i) ;
row2.write(2, ('รายละเอียด').decode('UTF8'),styleHead);
row2.write(3, ('หน่วยที่ตอบ').decode('UTF8'),styleHead );
row2.write(4, ('ระยะเวลาตอบ').decode('UTF8'),styleHead );
i=i+1;
for resp in value.get('responsible'):
row2 = sheet1.row(i) ;
row2.write(2, str(resp.get('detail')).decode('UTF8'),StyleRowDetailWrap);
row2.write(3, str(resp.get('service_name')).decode('UTF8'),styleRowDetail );
row2.write(4, str(resp.get('report_date')).decode('UTF8'),styleRowDetail );
i=i+1;
dirTempFile = _os.path.join(gettempdir(), 'simpleReport1.xls');
book.save(dirTempFile);
return dirTempFile;
def exportReport1ToExcel_old(self,objectProject):
book = Workbook();
sheet1 = book.add_sheet('Sheet 1');
sheet1.col(1).width = 256*80;
sheet1.col(2).width = 256*10;
sheet1.col(3).width = 256*10;
sheet1.col(4).width = 256*20;
sheet1.col(5).width = 256*20;
borders = Borders()
borders.left = Borders.THIN
borders.right = Borders.THIN
borders.top = Borders.THIN
borders.bottom = Borders.THIN
pattern = Pattern();
pattern.pattern = Pattern.SOLID_PATTERN
pattern.pattern_fore_colour = 23
wrap = Alignment();
wrap.wrap = 1;
wrap.vert = Alignment.VERT_TOP
alignHeader = Alignment();
alignHeader.horz = Alignment.HORZ_CENTER;
alignTop = Alignment();
alignTop.vert = Alignment.VERT_TOP
fnt = Font()
fnt.name = 'Arial'
fnt.colour_index = 4
fnt.bold = True
styleWrap = XFStyle();
styleWrap.alignment = wrap;
styleHead = XFStyle();
styleHead.font = fnt;
styleHead.borders = borders;
styleHead.pattern = pattern;
styleHead.alignment = alignHeader;
styleRowDetail = XFStyle();
styleRowDetail.borders = borders;
styleRowDetail.alignment = alignTop;
styleDate = XFStyle()
styleDate.num_format_str = 'DD-MM-YYYY' ; #'D-MMM-YY';
styleDate.borders = borders;
styleDate.alignment = alignTop;
StyleRowDetailWrap = XFStyle();  # a separate style: aliasing styleRowDetail here would give it wrap alignment too
StyleRowDetailWrap.borders = borders;
StyleRowDetailWrap.alignment = wrap;
if( objectProject):
i=0;
row1 = sheet1.row(i) ;
row1.write(0, ('ลำดับที่').decode('UTF8') ,styleHead);
#sheet1.write_merge(i, i, 1, 2, ('รายละเอียด').decode('UTF8') );
row1.write(1, ('อุบัติการณ์/ภาวะไม่พึงประสงค์').decode('UTF8'),styleHead);
row1.write(2, ('วันที่รายงาน').decode('UTF8'),styleHead );
row1.write(3, ('ความรุนแรง').decode('UTF8'),styleHead );
row1.write(4, ('ด้าน/โปรแกรม').decode('UTF8') ,styleHead);
row1.write(5, ('หน่วยที่รายงาน').decode('UTF8') ,styleHead);
i=i+1;
for value in objectProject:
row1 = sheet1.row(i) ;
row1.write(0, value.get('row') ,styleRowDetail );
row1.write(1, value.get('detail').decode('UTF8'),StyleRowDetailWrap );
row1.write(2, value.get('report_date') ,styleDate );
row1.write(3, value.get('level').decode('UTF8') ,styleRowDetail );
row1.write(4, value.get('pro').decode('UTF8') ,styleRowDetail );
row1.write(5, value.get('reporter').decode('UTF8') ,styleRowDetail );
i=i+1;
dirTempFile = _os.path.join(gettempdir(), 'simpleReport1.xls');
book.save(dirTempFile);
return dirTempFile;
def exportReport5ToExcel(self,objectProject):
book = Workbook();
sheet1 = book.add_sheet('Sheet 1');
sheet1.col(1).width = 256*80;
sheet1.col(2).width = 256*10;
sheet1.col(3).width = 256*20;
borders = Borders()
borders.left = Borders.THIN
borders.right = Borders.THIN
borders.top = Borders.THIN
borders.bottom = Borders.THIN
pattern = Pattern();
pattern.pattern = Pattern.SOLID_PATTERN
pattern.pattern_fore_colour = 23
wrap = Alignment();
wrap.wrap = 1;
wrap.vert = Alignment.VERT_TOP
alignHeader = Alignment();
alignHeader.horz = Alignment.HORZ_CENTER;
alignTop = Alignment();
alignTop.vert = Alignment.VERT_TOP
fnt = Font()
fnt.name = 'Arial'
fnt.colour_index = 4
fnt.bold = True
styleWrap = XFStyle();
styleWrap.alignment = wrap;
styleHead = XFStyle();
styleHead.font = fnt;
styleHead.borders = borders;
styleHead.pattern = pattern;
styleHead.alignment = alignHeader;
styleRowDetail = XFStyle();
styleRowDetail.borders = borders;
styleRowDetail.alignment = alignTop;
styleDate = XFStyle()
styleDate.num_format_str = 'DD-MM-YYYY' ; #'D-MMM-YY';
styleDate.borders = borders;
styleDate.alignment = alignTop;
StyleRowDetailWrap = XFStyle();  # a separate style: aliasing styleRowDetail here would give it wrap alignment too
StyleRowDetailWrap.borders = borders;
StyleRowDetailWrap.alignment = wrap;
if( objectProject):
i=0;
row1 = sheet1.row(i) ;
row1.write(0, ('risk id').decode('UTF8'),styleHead );
#sheet1.write_merge(i, i, 1, 2, ('รายละเอียด').decode('UTF8') );
row1.write(1, ('รายละเอียด').decode('UTF8'),styleHead);
row1.write(2, ('วันที่รายงาน').decode('UTF8'),styleHead );
row1.write(3, ('หน่วยที่รายงาน').decode('UTF8') ,styleHead);
i=i+1;
for value in objectProject:
row1 = sheet1.row(i) ;
row1.write(0, value.get('risk_management_id') ,styleRowDetail );
row1.write(1, value.get('risk_detail').decode('UTF8'),StyleRowDetailWrap );
#sheet1.write_merge(i, i, 1, 2, value.get('risk_detail').decode('UTF8') , StyleRowDetailWrap );
row1.write(2, value.get('report_date') ,styleDate );
row1.write(3, value.get('report').decode('UTF8') ,styleRowDetail );
i=i+1;
for sub in value.get('response') :
row1 = sheet1.row(i) ;
row1.write(0," " );
text = "(" + sub.get('risk_team').decode('UTF8') + " ) " + sub.get('result').decode('UTF8');
row1.write(1, text ,StyleRowDetailWrap );
i=i+1;
dirTempFile = _os.path.join(gettempdir(), 'simpleReport5.xls');
book.save(dirTempFile);
return dirTempFile;
def exportToExcel(self,objectProject):
book = Workbook();
sheet1 = book.add_sheet('Sheet 1')
sheet1.col(1).width = 256*20;
sheet1.col(2).width = 256*80;
sheet1.col(3).width = 256*10;
sheet1.col(4).width = 256*20;
default_book_style = book.default_style
default_book_style.font.height = 20 * 36 # 36pt
fnt = Font()
fnt.name = 'Arial'
fnt.colour_index = 4
fnt.bold = True
borders = Borders()
borders.left = Borders.THIN
borders.right = Borders.THIN
borders.top = Borders.THIN
borders.bottom = Borders.THIN
pattern = Pattern();
pattern.pattern = Pattern.SOLID_PATTERN
pattern.pattern_fore_colour = 23
algn1 = Alignment();
algn1.wrap = 1;
#algn1.horz = Alignment.HORZ_CENTER
#algn1.vert = Alignment.VERT_TOP
alignHeader = Alignment();
alignHeader.horz = Alignment.HORZ_CENTER;
alignTop = Alignment();
alignTop.vert = Alignment.VERT_TOP
print "export";
if( objectProject):
i=0;
print "start" ;
styleHead = XFStyle();
styleHead.font = fnt;
styleHead.borders = borders;
styleHead.pattern = pattern;
styleHead.alignment = alignHeader;
row1 = sheet1.row(i) ;
row1.write(0, ('risk id').decode('UTF8'),styleHead );
sheet1.write_merge(i, i, 1, 2, ('รายละเอียด').decode('UTF8') ,styleHead );
# row1.write(1, ('รายละเอียด').decode('UTF8'));
row1.write(3, ('วันที่รายงาน').decode('UTF8'), styleHead );
row1.write(4, ('หน่วยที่รายงาน').decode('UTF8'), styleHead );
i=i+1;
style1 = XFStyle();
style1.alignment = algn1;
#style0 = xlwt.easyxf('font: name Times New Roman size 20, color-index black, bold on')
for value in objectProject:
row1 = sheet1.row(i) ;
styleRowDetail = XFStyle();
styleRowDetail.borders = borders;
styleRowDetail.alignment = alignTop;
StyleRowDetailWrap = XFStyle();  # a separate style: aliasing styleRowDetail would overwrite its alignment as well
StyleRowDetailWrap.borders = borders;
StyleRowDetailWrap.alignment = algn1;
styleDate = XFStyle()
styleDate.num_format_str = 'DD-MM-YYYY' ; #'D-MMM-YY';
styleDate.borders = borders;
row1.write(0, value.get('risk_management_id'),styleRowDetail );
#row1.write(1, value.get('risk_detail').decode('UTF8') , style1);
sheet1.write_merge(i, i, 1, 2, value.get('risk_detail').decode('UTF8') , StyleRowDetailWrap );
row1.write(3, value.get('report_date') ,styleDate);
row1.write(4, value.get('report').decode('UTF8') ,styleRowDetail );
i=i+1;
row1 = sheet1.row(i) ;
row1.write(0," " );
row1.write(1,('หน่วยที่เกี่ยวข้อง').decode('UTF8') ,styleHead );
sheet1.write_merge(i, i, 2, 3,('รายละเอียดการตอบ').decode('UTF8') , styleHead );
i=i+1;
for sub in value.get('response') :
row1 = sheet1.row(i) ;
row1.write(0," " );
row1.write(1,sub.get('risk_team').decode('UTF8') , styleRowDetail );
sheet1.write_merge(i, i, 2, 3,sub.get('result').decode('UTF8') , StyleRowDetailWrap );
i=i+1;
dirTempFile = _os.path.join(gettempdir(), 'simple.xls');
print dirTempFile;
book.save(dirTempFile);
return dirTempFile;
| 37.139303 | 118 | 0.505626 | 1,511 | 14,930 | 4.988087 | 0.119126 | 0.068993 | 0.065543 | 0.039671 | 0.888683 | 0.86626 | 0.84198 | 0.815046 | 0.753881 | 0.714077 | 0 | 0.042325 | 0.363831 | 14,930 | 402 | 119 | 37.139303 | 0.743946 | 0.041728 | 0 | 0.768966 | 0 | 0 | 0.066121 | 0.004058 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.003448 | 0.017241 | null | null | 0.010345 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
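`exportReport1ToExcel` above interleaves one master row per risk with a sub-header row and one row per `responsible` entry, tracking the next free row in `i`. A minimal pure-Python sketch of that row bookkeeping (no xlwt dependency; the sample records are hypothetical) that returns the `(row, col, value)` triples in write order:

```python
def layout_report1(records):
    """Mimic the row bookkeeping of exportReport1ToExcel: one header row,
    then per record a master row, a sub-header row, and one detail row
    per responsible entry."""
    cells = []  # (row, col, value) triples, in write order
    i = 0
    cells.append((i, 0, "header"))  # column headers
    i += 1
    for rec in records:
        cells.append((i, 0, rec["risk_id"]))  # master row
        i += 1
        cells.append((i, 2, "sub-header"))    # responsible header row
        i += 1
        for resp in rec["responsible"]:
            cells.append((i, 2, resp))        # detail row
            i += 1
    return cells

# Hypothetical data: two risks with 2 and 1 responsible entries.
records = [
    {"risk_id": "R1", "responsible": ["ward A", "ward B"]},
    {"risk_id": "R2", "responsible": ["lab"]},
]
cells = layout_report1(records)
# 1 header + (1 master + 1 sub-header + len(responsible)) per record = rows 0..7
assert max(r for r, _, _ in cells) == 7
```

The same counter pattern is what keeps the Excel output dense: every write advances `i` by exactly one row, so no rows are skipped or overwritten.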
f2012db4e6f2034dc5c9f6fe1d793f84213d158a | 7,719 | py | Python | UtilitiesNetwork.py | AnilOsmanTur/ComplexNetworksProjects | 3cbd3f3034ad6ee3d85b37ed267bb1c1a10df6b5 | [
"MIT"
] | 9 | 2019-06-20T11:52:06.000Z | 2022-01-23T15:32:51.000Z | UtilitiesNetwork.py | AnilOsmanTur/ComplexNetworksProjects | 3cbd3f3034ad6ee3d85b37ed267bb1c1a10df6b5 | [
"MIT"
] | null | null | null | UtilitiesNetwork.py | AnilOsmanTur/ComplexNetworksProjects | 3cbd3f3034ad6ee3d85b37ed267bb1c1a10df6b5 | [
"MIT"
] | 1 | 2020-04-27T16:45:05.000Z | 2020-04-27T16:45:05.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Dec 3 00:51:30 2017
@author: anilosmantur
Visualization functions
"""
import networkx as nx
import matplotlib.pyplot as plt
def drawCommunityGraph(G, pos, part, fgSize, nodeSize=35):
size = float(len(set(part.values())))
print ('Found community count: ', size)
count = 0.
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
for com in set(part.values()) :
count = count + 1.
list_nodes = [nodes for nodes in part.keys() if part[nodes] == com]
values = [ (count / size) for nodes in list_nodes]
nodes = nx.draw_networkx_nodes(G,
pos,
list_nodes,
cmap=plt.get_cmap('jet'),
with_labels=False,
node_size = nodeSize,
node_color = values,
vmin=0.0, vmax=1.0 ) # color map magma
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, alpha=0.5)
edges.set_linewidth(0.5)
plt.show()
print('Printing community layout finished')
def drawCommunityGraphSave(G, pos, part, fgSize, name):
size = float(len(set(part.values())))
print ('Found community count: ', size)
count = 0.
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
for com in set(part.values()) :
count = count + 1.
list_nodes = [nodes for nodes in part.keys() if part[nodes] == com]
values = [ (count / size) for nodes in list_nodes]
nodes = nx.draw_networkx_nodes(G,
pos,
list_nodes,
cmap=plt.get_cmap('jet'),
with_labels=False,
node_size = 35,
node_color = values,
vmin=0.0, vmax=1.0 ) # color map magma
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, alpha=0.5)
edges.set_linewidth(0.5)
plt.savefig('graphs/' + name +'_net_communities.png')
print('Printing community layout finished')
def drawCentralityGraph(G, pos, cent, fgSize, nodeSize=35):
count = 0.
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
for com in set(cent.values()) :
count = count + 1.
list_nodes = [nodes for nodes in cent.keys() if cent[nodes] == com]
values = [ (400 * com) for nodes in list_nodes]
nodes = nx.draw_networkx_nodes(G,
pos,
list_nodes,
cmap=plt.get_cmap('jet'),
with_labels=False,
node_size = values,
node_color = values,
vmin=0.0, vmax=1.0 ) # color map magma
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, alpha=0.5)
edges.set_linewidth(0.5)
plt.show()
print('Printing centrality layout finished')
def drawCentralityGraphSave(G, pos, cent, fgSize, name='', c_type=''):
count = 0.
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
for com in set(cent.values()) :
count = count + 1.
list_nodes = [nodes for nodes in cent.keys() if cent[nodes] == com]
values = [ (400 * com) for nodes in list_nodes]
nodes = nx.draw_networkx_nodes(G,
pos,
list_nodes,
cmap=plt.get_cmap('jet'),
with_labels=False,
node_size = values,
node_color = values,
vmin=0.0, vmax=1.0 ) # color map magma
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, alpha=0.5)
edges.set_linewidth(0.5)
plt.savefig('graphs/' + name +'_net_'+ c_type + '.png')
print('Printing centrality layout finished')
def nxDrawCommunityGraph(G, pos, coms, fgSize, nodeSize=35):
size = len(coms)
print ('community count: ', size)
count = 0.
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
for com in coms :
count = count + 1.
list_nodes = [nodes for nodes in com]
values = [ (count / size) for nodes in list_nodes]
nodes = nx.draw_networkx_nodes(G,
pos,
list_nodes,
cmap=plt.get_cmap('jet'),
with_labels=False,
node_size = nodeSize,
node_color = values,
vmin=0.0, vmax=1.0 ) # color map magma
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, alpha=0.5)
edges.set_linewidth(0.5)
plt.show()
print('Printing community layout finished')
def drawGraph(G, pos, fgSize, nodeSize=35):
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
nodes = nx.draw_networkx_nodes(G, pos, node_size=nodeSize, node_color='red')
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, edge_color='blue')
edges.set_linewidth(0.5)
plt.show()
print('Printing layout finished')
def drawGraphSave(G, pos, fgSize, name=''):
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
nodes = nx.draw_networkx_nodes(G, pos, node_size=35, node_color='red')
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, edge_color='blue')
edges.set_linewidth(0.5)
plt.savefig('graphs/' + name +'_net.png')
print('Printing layout finished')
def drawEgoGraph(G, pos, egoNode, fgSize):
plt.figure(figsize=(fgSize, fgSize))
plt.axis('off')
plt.margins(tight=True)
nodes = nx.draw_networkx_nodes(G, pos, node_size=35, node_color='green')
nodes.set_edgecolor('black')
nodes.set_linewidth(1.0)
edges = nx.draw_networkx_edges(G, pos, edge_color='blue')
edges.set_linewidth(0.5)
plt.show()
print('Printing Ego layout finished')
def centralityPlot(cent, fgSize):
plt.figure(figsize=(fgSize, fgSize))
plt.margins(tight=True)
cent = sorted(cent.items())
values = [ c for (node, c) in cent]
nodes = [ node for (node, c) in cent]
plt.plot(nodes, values)
plt.show()
print('Printing Centrality Plot Finished')
def centralityPlotSave(cent, fgSize, name='', c_type=''):
plt.figure(figsize=(fgSize, fgSize))
plt.margins(tight=True)
cent = sorted(cent.items())
values = [ c for (node, c) in cent]
nodes = [ node for (node, c) in cent]
plt.plot(nodes, values)
plt.savefig('graphs/' + name +'_net_plot_'+ c_type + '.png')
print('Printing Centrality Plot Finished')
| 37.289855 | 80 | 0.532064 | 915 | 7,719 | 4.36612 | 0.123497 | 0.02403 | 0.05607 | 0.055069 | 0.869587 | 0.828035 | 0.79174 | 0.788736 | 0.788736 | 0.779975 | 0 | 0.020929 | 0.350045 | 7,719 | 206 | 81 | 37.470874 | 0.775364 | 0.026947 | 0 | 0.856322 | 0 | 0 | 0.07443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057471 | false | 0 | 0.011494 | 0 | 0.068966 | 0.074713 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
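Each community-drawing helper above maps the n-th community of K communities to the scalar n/K, then feeds those values to the 'jet' colormap so every node in a community shares one colour. A standalone sketch of that mapping (pure Python, no networkx; the partition dict is hypothetical, in the shape `python-louvain`'s `best_partition` returns, and communities are sorted here for determinism where the original iterates an unordered set):

```python
def community_color_values(part):
    """Replicate the count/size colour mapping used in drawCommunityGraph:
    the n-th community (1-based) gets n / number_of_communities, and every
    node assigned to that community shares the value."""
    communities = sorted(set(part.values()))
    size = float(len(communities))
    values = {}
    for count, com in enumerate(communities, start=1):
        for node in part:
            if part[node] == com:
                values[node] = count / size
    return values

# Hypothetical partition: nodes 0-3 split into two communities.
part = {0: 0, 1: 0, 2: 1, 3: 1}
vals = community_color_values(part)
assert vals == {0: 0.5, 1: 0.5, 2: 1.0, 3: 1.0}
```

With `vmin=0.0, vmax=1.0` fixed as in the drawing calls, these scalars span the full colormap regardless of how many communities were found.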
485e5bbc7e09c0373b2c146fac58b83e47c10b29 | 374 | py | Python | python/python_work4/if2.py | lucycore/python | c1c49e52392615851c0efec886d0e947b609681c | [
"Apache-2.0"
] | null | null | null | python/python_work4/if2.py | lucycore/python | c1c49e52392615851c0efec886d0e947b609681c | [
"Apache-2.0"
] | null | null | null | python/python_work4/if2.py | lucycore/python | c1c49e52392615851c0efec886d0e947b609681c | [
"Apache-2.0"
] | null | null | null | hi = "buff"
if hi == 'buff':
print("yes!")
else:
print("no!")
print("---------------------")
hi = "buff"
if hi != 'buff':
print("yes!")
else:
print("no!")
print("---------------------")
hi = 60
if hi == 60:
print("=60,yes")
if hi <= 70:
print("<=70,yes")
if hi < 70:
print("<70,yes")
if hi > 30:
print(">30,yes")
if hi >= 30:
print(">=30,yes")
| 9.842105 | 30 | 0.427807 | 54 | 374 | 2.962963 | 0.185185 | 0.175 | 0.175 | 0.125 | 0.90625 | 0.90625 | 0.90625 | 0.73125 | 0.73125 | 0.4875 | 0 | 0.07483 | 0.213904 | 374 | 37 | 31 | 10.108108 | 0.469388 | 0 | 0 | 0.434783 | 0 | 0 | 0.291444 | 0.112299 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.478261 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
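`if2.py` above exercises each comparison operator with a separate `if`. Python also allows chaining comparisons, which collapses the paired range checks into one expression:

```python
hi = 60
# Chained comparison: equivalent to (30 < hi) and (hi <= 70).
in_range = 30 < hi <= 70
assert in_range
# The operators compose exactly as the separate ifs above do:
assert hi == 60 and hi < 70 and hi >= 30
```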
486aa030022370f22b2704847c703280ae0d8e98 | 8,321 | py | Python | _states/danos.py | ppeereb1/kinetic | 060bf326e1659596360fb240972d635c45611e24 | [
"Apache-2.0"
] | null | null | null | _states/danos.py | ppeereb1/kinetic | 060bf326e1659596360fb240972d635c45611e24 | [
"Apache-2.0"
] | null | null | null | _states/danos.py | ppeereb1/kinetic | 060bf326e1659596360fb240972d635c45611e24 | [
"Apache-2.0"
] | null | null | null | ### Danos State Module
import json
from urllib.parse import quote
__virtualname__ = 'danos'
def __virtual__():
return __virtualname__
def set_resourcegroup(name,
type,
description,
values,
username,
password,
host,
**kwargs):
groupmap = {"address-group": "address",
"dscp-group": "dscp",
"port-group": "port",
"protocol-group": "protocol"}
ret = {"name": name, "result": False, "changes": {}, "comment": ""}
### If test isn't specified, assume test=false
if "test" not in kwargs:
kwargs["test"] = __opts__.get("test", False)
### If test is specified:
### Get current description and members and compare to target description
### and members. If the same, return result=true and no changes. If not
### the same, return changes dict.
if kwargs["test"]:
current_description = __salt__["danos.get_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/description', **kwargs)
current_members = __salt__["danos.get_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/'+groupmap[type], **kwargs)
memberlist = []
if "children" in current_members:
for member in json.loads(current_members)["children"]:
memberlist.append(member["name"])
descr = ""
if "children" in current_description:
descr = json.loads(current_description)["children"][0]["name"]
if (descr == description and set(memberlist) == set(values)):
ret["result"] = True
ret["comment"] = "The "+name+" resource group is up-to-date"
else:
ret["result"] = None
ret["comment"] = "The "+name+" resource group has required changes"
ret["changes"] = {"group":name,
"current description":descr,
"target description":description,
"current members":set(memberlist),
"target members":set(values)}
else:
current_description = __salt__["danos.get_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/description', **kwargs)
current_members = __salt__["danos.get_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/'+groupmap[type], **kwargs)
memberlist = []
if "children" in current_members:
for member in json.loads(current_members)["children"]:
memberlist.append(member["name"])
descr = ""
if "children" in current_description:
descr = json.loads(current_description)["children"][0]["name"]
if (descr == description and set(memberlist) == set(values)):
### no changes needed
ret["result"] = True
ret["comment"] = "The "+name+" resource group is up-to-date"
else:
### Changes are needed
### Create session to be used throughout
location = __salt__["danos.make_session"](host, username, password)
__salt__["danos.delete_configuration"](host, username, password, '/resources/group/'+type+'/'+name, location, **kwargs)
__salt__["danos.set_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/description/'+quote(description), location, **kwargs)
for value in values:
__salt__["danos.set_configuration"](host, username, password, '/resources/group/'+type+'/'+name+'/'+groupmap[type]+'/'+value, location, **kwargs)
__salt__["danos.commit_configuration"](host, username, password, location)
__salt__["danos.delete_session"](host, username, password, location)
ret["result"] = True
ret["comment"] = "The "+name+" resource group has been updated"
ret["changes"] = {"group":name,
"old description":descr,
"new description":description,
"old members":set(memberlist),
"new members":set(values)}
return ret
def set_statichostmapping(name,
address,
username,
password,
host,
aliases=None,
**kwargs):
ret = {"name": name, "result": False, "changes": {}, "comment": ""}
if aliases is None:
    aliases = []  # guard: set(aliases) below would raise TypeError on the None default
### If test isn't specified, assume test=false
if "test" not in kwargs:
kwargs["test"] = __opts__.get("test", False)
### If test is specified:
### Get current description and members and compare to target description
### and members. If the same, return result=true and no changes. If not
### the same, return changes dict.
if kwargs["test"]:
current_address = __salt__["danos.get_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/inet', **kwargs)
current_aliases = __salt__["danos.get_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/alias', **kwargs)
aliaslist = []
if "children" in current_aliases:
for alias in json.loads(current_aliases)["children"]:
aliaslist.append(alias["name"])
addr = ""
if "children" in current_address:
addr = json.loads(current_address)["children"][0]["name"]
if (addr == address and set(aliaslist) == set(aliases)):
ret["result"] = True
ret["comment"] = "The "+name+" static-host-mapping is up-to-date"
else:
ret["result"] = None
ret["comment"] = "The "+name+" static-host-mapping has required changes"
ret["changes"] = {"hostname":name,
"current address":addr,
"target address":address,
"current aliases":set(aliaslist),
"target aliases":set(aliases)}
else:
current_address = __salt__["danos.get_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/inet', **kwargs)
current_aliases = __salt__["danos.get_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/alias', **kwargs)
aliaslist = []
if "children" in current_aliases:
for alias in json.loads(current_aliases)["children"]:
aliaslist.append(alias["name"])
addr = ""
if "children" in current_address:
addr = json.loads(current_address)["children"][0]["name"]
if (addr == address and set(aliaslist) == set(aliases)):
### no changes needed
ret["result"] = True
ret["comment"] = "The "+name+" static-host-mapping is up-to-date"
else:
### Changes are needed
### Create session to be used throughout
location = __salt__["danos.make_session"](host, username, password)
__salt__["danos.delete_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name, location, **kwargs)
__salt__["danos.set_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/inet/'+address, location, **kwargs)
for alias in aliases:
__salt__["danos.set_configuration"](host, username, password, '/system/static-host-mapping/host-name/'+name+'/alias/'+alias, location, **kwargs)
__salt__["danos.commit_configuration"](host, username, password, location)
__salt__["danos.delete_session"](host, username, password, location)
ret["result"] = True
ret["comment"] = "The "+name+" static-host-mapping has been updated"
ret["changes"] = {"hostname":name,
"current address":addr,
"target address":address,
"current aliases":set(aliaslist),
"target aliases":set(aliases)}
return ret
| 47.011299 | 163 | 0.566158 | 830 | 8,321 | 5.5 | 0.120482 | 0.077108 | 0.087623 | 0.115663 | 0.861117 | 0.845345 | 0.845345 | 0.841183 | 0.832202 | 0.826944 | 0 | 0.000681 | 0.293595 | 8,321 | 176 | 164 | 47.278409 | 0.775944 | 0.076673 | 0 | 0.723077 | 0 | 0 | 0.253571 | 0.084655 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023077 | false | 0.169231 | 0.015385 | 0.007692 | 0.061538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
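Both state functions above follow the same idempotence pattern: read the current description and member set, compare to the target, and either report up-to-date or produce a changes dict (with `result=None` signalling "would change" in test mode, per SaltStack's state-return convention). A minimal pure-Python sketch of just that comparison, with no salt dependency and hypothetical inputs:

```python
def diff_group(name, current_descr, current_members, target_descr, target_members):
    """Mirror the test-mode logic of set_resourcegroup: True with empty
    changes when description and member set already match, otherwise
    None plus a changes dict describing the drift."""
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    if current_descr == target_descr and set(current_members) == set(target_members):
        ret["result"] = True
        ret["comment"] = "The " + name + " resource group is up-to-date"
    else:
        ret["result"] = None
        ret["comment"] = "The " + name + " resource group has required changes"
        ret["changes"] = {
            "group": name,
            "current description": current_descr,
            "target description": target_descr,
            "current members": set(current_members),
            "target members": set(target_members),
        }
    return ret

same = diff_group("dns", "DNS servers", ["1.1.1.1"], "DNS servers", ["1.1.1.1"])
assert same["result"] is True and same["changes"] == {}
drift = diff_group("dns", "DNS servers", ["1.1.1.1"], "DNS servers", ["8.8.8.8"])
assert drift["result"] is None
assert drift["changes"]["target members"] == {"8.8.8.8"}
```

Comparing via `set(...)` is what makes the check order-insensitive, so reordered group members do not trigger a spurious rewrite of the resource group.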
487b1d6a3642d2b4d5a07317fc0dd8f4f8549146 | 1,000 | py | Python | wwag/decorators.py | zhoutong/wwag | 41afb58589781e5c168f76f918cf00a93b35a78c | [
"MIT"
] | null | null | null | wwag/decorators.py | zhoutong/wwag | 41afb58589781e5c168f76f918cf00a93b35a78c | [
"MIT"
] | null | null | null | wwag/decorators.py | zhoutong/wwag | 41afb58589781e5c168f76f918cf00a93b35a78c | [
"MIT"
] | null | null | null | from functools import wraps
from flask import g, request, redirect, url_for
def player_login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if g.get('current_player') is None:
return redirect(url_for('users_login', error="You must sign in as a Player to access this page."))
return f(*args, **kwargs)
return decorated_function
def viewer_login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if g.get('current_viewer') is None:
return redirect(url_for('users_login', error="You must sign in as a Viewer to access this page."))
return f(*args, **kwargs)
return decorated_function
def staff_login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if g.get('current_player') is None or g.current_player['Type'] != "S":
return redirect(url_for('users_login', error="You must sign in as a Player with staff permissions to access this page."))
return f(*args, **kwargs)
return decorated_function
| 37.037037 | 127 | 0.715 | 153 | 1,000 | 4.522876 | 0.287582 | 0.147399 | 0.080925 | 0.08237 | 0.806358 | 0.806358 | 0.806358 | 0.806358 | 0.806358 | 0.806358 | 0 | 0 | 0.165 | 1,000 | 26 | 128 | 38.461538 | 0.828743 | 0 | 0 | 0.521739 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.26087 | false | 0 | 0.086957 | 0 | 0.73913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
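The three decorators above differ only in which attribute of `g` they check and which error message they pass to the login view. A standalone sketch of the same guard pattern as a decorator factory, without Flask (the `ctx` dict and redirect tuple here are hypothetical stand-ins for `flask.g` and `redirect(url_for(...))`):

```python
from functools import wraps

def login_required(check, error):
    """Factory for guard decorators in the style of player_login_required:
    run check(ctx); on failure return a redirect marker, else call through."""
    def decorator(f):
        @wraps(f)
        def wrapped(ctx, *args, **kwargs):
            if not check(ctx):
                return ("redirect", "users_login", error)
            return f(ctx, *args, **kwargs)
        return wrapped
    return decorator

# Hypothetical context dict standing in for flask.g.
staff_required = login_required(
    check=lambda ctx: ctx.get("current_player", {}).get("Type") == "S",
    error="You must sign in as a Player with staff permissions.",
)

@staff_required
def admin_page(ctx):
    return "admin ok"

assert admin_page({"current_player": {"Type": "S"}}) == "admin ok"
assert admin_page({})[0] == "redirect"
```

With a factory like this, the three original decorators become three one-line `login_required(...)` calls; `@wraps` keeps the wrapped view's name intact, which Flask relies on for endpoint registration.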
6f9ea45b5d49a9c6e979efa071707b6359ec27be | 3,989 | py | Python | operations/propfac/migrations/0004_auto_20170623_0905.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | operations/propfac/migrations/0004_auto_20170623_0905.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | operations/propfac/migrations/0004_auto_20170623_0905.py | kaizer88/emps | 2669b32c46befcf1a19390fb25013817e6b00980 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.db.models.deletion
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('propfac', '0003_auto_20170613_1019'),
]
operations = [
migrations.RenameField(
model_name='historicalpfrequisition',
old_name='quote_number',
new_name='quotation1',
),
migrations.RenameField(
model_name='pfrequisition',
old_name='quote_number',
new_name='quotation1',
),
migrations.AddField(
model_name='historicalpfrequisition',
name='amount',
field=models.FloatField(default=0, max_length=20, null=True, blank=True),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='authorized',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='authorized_by',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='budgeted',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='finalized',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='finalized_by',
field=models.ForeignKey(related_name='+', on_delete=django.db.models.deletion.DO_NOTHING, db_constraint=False, blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='mortivation',
field=models.CharField(default=None, max_length=2000, null=True, blank=True),
),
migrations.AddField(
model_name='historicalpfrequisition',
name='quotation2',
field=models.CharField(default=None, max_length=120),
),
migrations.AddField(
model_name='pfrequisition',
name='amount',
field=models.FloatField(default=0, max_length=20, null=True, blank=True),
),
migrations.AddField(
model_name='pfrequisition',
name='authorized',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='pfrequisition',
name='authorized_by',
field=models.ForeignKey(related_name='authorizer_propfac_requisitions', blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='pfrequisition',
name='budgeted',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='pfrequisition',
name='finalized',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='pfrequisition',
name='finalized_by',
field=models.ForeignKey(related_name='finalizer_propfac_requisitions', blank=True, to=settings.AUTH_USER_MODEL, null=True),
),
migrations.AddField(
model_name='pfrequisition',
name='mortivation',
field=models.CharField(default=None, max_length=2000, null=True, blank=True),
),
migrations.AddField(
model_name='pfrequisition',
name='quotation2',
field=models.CharField(default=None, max_length=120),
),
]
| 36.935185 | 175 | 0.60742 | 360 | 3,989 | 6.544444 | 0.202778 | 0.068761 | 0.156197 | 0.183362 | 0.843803 | 0.843803 | 0.809847 | 0.809847 | 0.734295 | 0.734295 | 0 | 0.014321 | 0.282276 | 3,989 | 107 | 176 | 37.280374 | 0.808592 | 0.005264 | 0 | 0.871287 | 0 | 0 | 0.156077 | 0.073374 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.039604 | 0 | 0.069307 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6fa6f7414c912b3dc78946ebaffc1baaaaff7475 | 13,996 | py | Python | matching_performance.py | Propaler/FedMA | e235d971e192fb0e93abd4ad37ac603552b6484c | [
"MIT"
] | null | null | null | matching_performance.py | Propaler/FedMA | e235d971e192fb0e93abd4ad37ac603552b6484c | [
"MIT"
] | null | null | null | matching_performance.py | Propaler/FedMA | e235d971e192fb0e93abd4ad37ac603552b6484c | [
"MIT"
] | null | null | null | import logging
from model import *
from utils import *
from vgg import *
from vgg import matched_vgg11
def compute_model_averaging_accuracy(models, weights, train_dl, test_dl, n_classes, args):
"""An variant of fedaveraging"""
if args.model == "lenet":
avg_cnn = LeNet()
elif args.model == "vgg":
avg_cnn = vgg11()
elif args.model == "simple-cnn":
if args.dataset in ("cifar10", "cinic10","hpe-cifar10"):
avg_cnn = SimpleCNN(input_dim=(16 * 5 * 5), hidden_dims=[120, 84], output_dim=10)
elif args.dataset == "mnist" or args.dataset == 'hpe-mnist':
avg_cnn = SimpleCNNMNIST(input_dim=(16 * 4 * 4), hidden_dims=[120, 84], output_dim=10)
elif args.model == "moderate-cnn":
if args.dataset in ("cifar10", "cinic10","hpe-cifar10"):
avg_cnn = ModerateCNN()
elif args.dataset == "mnist" or args.dataset == 'hpe-mnist':
avg_cnn = ModerateCNNMNIST()
new_state_dict = {}
model_counter = 0
# handle the conv layers part which is not changing
for param_idx, (key_name, param) in enumerate(avg_cnn.state_dict().items()):
if "conv" in key_name or "features" in key_name:
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx].reshape(param.size()))}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx])}
elif "fc" in key_name or "classifier" in key_name:
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx].T)}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx])}
new_state_dict.update(temp_dict)
avg_cnn.load_state_dict(new_state_dict)
# switch to eval mode:
avg_cnn.eval()
##
correct, total = 0, 0
for batch_idx, (x, target) in enumerate(test_dl):
out_k = avg_cnn(x)
_, pred_label = torch.max(out_k, 1)
total += x.data.size()[0]
correct += (pred_label == target.data).sum().item()
logger.info("Accuracy for Fed Averaging correct: {}, total: {}".format(correct, total))
def compute_pdm_cnn_accuracy(models, weights, train_dl, test_dl, n_classes, assignments):
"""Note that we only handle the FC weights for now"""
# we need to figure out the FC dims first
matched_weights = weights[1:] # get rid of the dummy layer, this should be deprecated later
input_dim = matched_weights[0].shape[0] # hard coded for now, will make changes later
hidden_dims = [matched_weights[0].shape[1], matched_weights[2].shape[1]]
output_dim = matched_weights[-1].shape[0]
logger.info("Input dim: {}, hidden_dims: {}, output_dim: {}".format(input_dim, hidden_dims, output_dim))
args_n_nets = len(models)
#book_keeper = {4:0, 5:1, 6:2, 7:3, 8:4, 9:5}
unmatched_cnn_blocks = []
for model_i, model in enumerate(models):
tempt_cnn = ConvBlock()
#logger.info("Keys of layers of convblock ...")
        #logger.info(tempt_cnn.state_dict().keys())
new_state_dict = {}
model_counter = 0
# handle the conv layers part which is not changing
for param_idx, (key_name, param) in enumerate(tempt_cnn.state_dict().items()):
if "conv" in key_name:
temp_dict = {key_name: models[model_i].state_dict()[key_name]}
new_state_dict.update(temp_dict)
model_counter += 1
tempt_cnn.load_state_dict(new_state_dict)
unmatched_cnn_blocks.append(tempt_cnn)
matched_state_dict = {}
matched_fcs = FCBlock(input_dim, hidden_dims, output_dim)
for param_idx, (key_name, param) in enumerate(matched_fcs.state_dict().items()):
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(matched_weights[param_idx].T)}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(matched_weights[param_idx])}
matched_state_dict.update(temp_dict)
matched_fcs.load_state_dict(matched_state_dict)
# switch to eval mode:
for model in unmatched_cnn_blocks:
model.eval()
matched_fcs.eval()
##
correct, total = 0, 0
for batch_idx, (x, target) in enumerate(test_dl):
#combined_outputs = []
outputs_aggregator = np.zeros((x.size()[0], weights[0].shape[0]), dtype=np.float32)
for model_idx in range(args_n_nets):
            # here we need to:
            # i) align the outputs according to the assignments of the input layer
            # ii) average the aligned outputs
out = unmatched_cnn_blocks[model_idx](x)
out_numpy = out.detach().numpy()
padded_out = np.zeros((out.size()[0], weights[0].shape[0]), dtype=np.float32)
padded_out[:, assignments[2][model_idx]] = out_numpy
outputs_aggregator += padded_out
#combined_outputs.append(padded_out)
outputs_aggregator /= args_n_nets # averaging step
combined_conv_block_out = torch.from_numpy(outputs_aggregator)
out_k = matched_fcs(combined_conv_block_out)
_, pred_label = torch.max(out_k, 1)
total += x.data.size()[0]
correct += (pred_label == target.data).sum().item()
logger.info("Accuracy for Neural Matching correct: {}, total: {}".format(correct, total))
def compute_pdm_vgg_accuracy(models, weights, train_dl, test_dl, n_classes, assignments):
"""Note that we only handle the FC weights for now"""
# we need to figure out the FC dims first
matched_weights = weights[1:] # get rid of the dummy layer, this should be deprecated later
input_dim = matched_weights[0].shape[0] # hard coded for now, will make changes later
hidden_dims = [matched_weights[0].shape[1], matched_weights[2].shape[1]]
output_dim = matched_weights[-1].shape[0]
logger.info("Input dim: {}, hidden_dims: {}, output_dim: {}".format(input_dim, hidden_dims, output_dim))
args_n_nets = len(models)
unmatched_cnn_blocks = []
for model_i, model in enumerate(models):
tempt_cnn = VGGConvBlocks(make_layers(cfg['A'], batch_norm=True), num_classes=10)
new_state_dict = {}
model_counter = 0
# handle the conv layers part which is not changing
for param_idx, (key_name, param) in enumerate(tempt_cnn.state_dict().items()):
if "classifier" not in key_name:
temp_dict = {key_name: models[model_i].state_dict()[key_name]}
new_state_dict.update(temp_dict)
model_counter += 1
tempt_cnn.load_state_dict(new_state_dict)
unmatched_cnn_blocks.append(tempt_cnn)
matched_state_dict = {}
matched_fcs = FCBlockVGG(input_dim, hidden_dims, output_dim)
for param_idx, (key_name, param) in enumerate(matched_fcs.state_dict().items()):
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(matched_weights[param_idx].T)}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(matched_weights[param_idx])}
matched_state_dict.update(temp_dict)
matched_fcs.load_state_dict(matched_state_dict)
# switch to eval mode:
for model in unmatched_cnn_blocks:
model.eval()
matched_fcs.eval()
##
correct, total = 0, 0
for batch_idx, (x, target) in enumerate(test_dl):
#combined_outputs = []
outputs_aggregator = np.zeros((x.size()[0], weights[0].shape[0]), dtype=np.float32)
for model_idx in range(args_n_nets):
            # here we need to:
            # i) align the outputs according to the assignments of the input layer
            # ii) average the aligned outputs
out = unmatched_cnn_blocks[model_idx](x)
out_numpy = out.detach().numpy()
padded_out = np.zeros((out.size()[0], weights[0].shape[0]), dtype=np.float32)
padded_out[:, assignments[2][model_idx]] = out_numpy
outputs_aggregator += padded_out
#combined_outputs.append(padded_out)
#print(combined_outputs)
outputs_aggregator /= args_n_nets # averaging step
combined_conv_block_out = torch.from_numpy(outputs_aggregator)
out_k = matched_fcs(combined_conv_block_out)
_, pred_label = torch.max(out_k, 1)
total += x.data.size()[0]
correct += (pred_label == target.data).sum().item()
logger.info("Accuracy for Neural Matching correct: {}, total: {}".format(correct, total))
def compute_full_cnn_accuracy(models, weights, train_dl, test_dl, n_classes, device, args):
"""Note that we only handle the FC weights for now"""
# we need to figure out the FC dims first
#LeNetContainer
# def __init__(self, num_filters, kernel_size, input_dim, hidden_dims, output_dim=10)
    # this should be safe to hard-code since most modern image classification datasets are in RGB format
#args_n_nets = len(models)
if args.model == "lenet":
num_filters = [weights[0].shape[0], weights[2].shape[0]]
kernel_size = 5
input_dim = weights[4].shape[0]
hidden_dims = [weights[4].shape[1]]
output_dim = weights[-1].shape[0]
logger.info("Num filters: {}, Input dim: {}, hidden_dims: {}, output_dim: {}".format(num_filters, input_dim, hidden_dims, output_dim))
matched_cnn = LeNetContainer(
num_filters=num_filters,
kernel_size=kernel_size,
input_dim=input_dim,
hidden_dims=hidden_dims,
output_dim=output_dim)
elif args.model == "vgg":
matched_shapes = [w.shape for w in weights]
matched_cnn = matched_vgg11(matched_shapes=matched_shapes)
elif args.model == "simple-cnn":
# input_channel, num_filters, kernel_size, input_dim, hidden_dims, output_dim=10):
# [(9, 75), (9,), (19, 225), (19,), (475, 123), (123,), (123, 87), (87,), (87, 10), (10,)]
if args.dataset in ("cifar10", "cinic10","hpe-cifar10"):
input_channel = 3
elif args.dataset == "mnist" or args.dataset == 'hpe-mnist':
input_channel = 1
num_filters = [weights[0].shape[0], weights[2].shape[0]]
input_dim = weights[4].shape[0]
hidden_dims = [weights[4].shape[1], weights[6].shape[1]]
matched_cnn = SimpleCNNContainer(input_channel=input_channel,
num_filters=num_filters,
kernel_size=5,
input_dim=input_dim,
hidden_dims=hidden_dims,
output_dim=10)
elif args.model == "moderate-cnn":
#[(35, 27), (35,), (68, 315), (68,), (132, 612), (132,), (132, 1188), (132,),
#(260, 1188), (260,), (260, 2340), (260,),
#(4160, 1025), (1025,), (1025, 515), (515,), (515, 10), (10,)]
num_filters = [weights[0].shape[0], weights[2].shape[0], weights[4].shape[0], weights[6].shape[0], weights[8].shape[0], weights[10].shape[0]]
input_dim = weights[12].shape[0]
hidden_dims = [weights[12].shape[1], weights[14].shape[1]]
if args.dataset in ("cifar10", "cinic10","hpe-cifar10"):
matched_cnn = ModerateCNNContainer(3,
num_filters,
kernel_size=3,
input_dim=input_dim,
hidden_dims=hidden_dims,
output_dim=10)
elif args.dataset == "mnist" or args.dataset == 'hpe-mnist':
matched_cnn = ModerateCNNContainer(1,
num_filters,
kernel_size=3,
input_dim=input_dim,
hidden_dims=hidden_dims,
output_dim=10)
#logger.info("Keys of layers of convblock ...")
new_state_dict = {}
model_counter = 0
# handle the conv layers part which is not changing
for param_idx, (key_name, param) in enumerate(matched_cnn.state_dict().items()):
#print("&"*30)
#print("Key: {}, Weight Shape: {}, Matched weight shape: {}".format(key_name, param.size(), weights[param_idx].shape))
#print("&"*30)
if "conv" in key_name or "features" in key_name:
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx].reshape(param.size()))}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx])}
elif "fc" in key_name or "classifier" in key_name:
if "weight" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx].T)}
elif "bias" in key_name:
temp_dict = {key_name: torch.from_numpy(weights[param_idx])}
new_state_dict.update(temp_dict)
matched_cnn.load_state_dict(new_state_dict)
matched_cnn.to(device)
matched_cnn.eval()
##
correct, total = 0, 0
for batch_idx, (x, target) in enumerate(test_dl):
x, target = x.to(device), target.to(device)
out_k = matched_cnn(x)
_, pred_label = torch.max(out_k, 1)
total += x.data.size()[0]
correct += (pred_label == target.data).sum().item()
logger.info("Accuracy for Neural Matching correct: {}, total: {}".format(correct, total))
return matched_cnn
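Every evaluation loop above tallies argmax correctness the same way; in numpy terms the per-batch computation reduces to the following (illustrative helper, not part of the original file):

```python
import numpy as np

def accuracy_from_logits(logits, targets):
    # Mirror of the eval loops above: argmax over the class dim, count matches.
    preds = np.argmax(logits, axis=1)
    return float((preds == targets).sum()) / len(targets)
```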
# ---- File: hms_tz/hms_tz/doctype/healthcare.py (repo: av-dev2/hms_tz, license: MIT) ----
from __future__ import unicode_literals
def get_data():
return []
# ---- File: errors/team.py (repo: CloudCIX/membership, license: Apache-2.0) ----
"""
Error Codes for all of the Methods in the Team Service
"""
# Create
membership_team_create_101 = 'The "name" parameter is invalid. "name" is required and must be a string.'
membership_team_create_102 = 'The "name" parameter is invalid. "name" cannot be longer than 50 characters.'
membership_team_create_103 = (
'The "users" parameter is invalid. "users" must be an array of integers representing User ids.'
)
membership_team_create_104 = (
'The "users" parameter is invalid. One or more of the sent ids do not correspond to Users in your Member.'
)
membership_team_create_201 = 'You do not have permission to make this request. Your Member must be self-managed.'
membership_team_create_202 = 'You do not have permission to make this request. You must be an administrator.'
# Read
membership_team_read_001 = 'The "pk" path parameter is invalid. "pk" must belong to a valid Team record.'
# Update
membership_team_update_001 = 'The "pk" path parameter is invalid. "pk" must belong to a valid Team record.'
membership_team_update_101 = 'The "name" parameter is invalid. "name" is required and must be a string.'
membership_team_update_102 = 'The "name" parameter is invalid. "name" cannot be longer than 50 characters.'
membership_team_update_103 = (
'The "users" parameter is invalid. "users" must be an array of integers representing User ids.'
)
membership_team_update_104 = (
'The "users" parameter is invalid. One or more of the sent ids do not correspond to Users in your Member.'
)
membership_team_update_201 = 'You do not have permission to make this request. Your Member must be self-managed.'
membership_team_update_202 = 'You do not have permission to make this request. You must be an administrator.'
# Delete
membership_team_delete_001 = 'The "pk" path parameter is invalid. "pk" must belong to a valid Team record.'
membership_team_delete_201 = 'You do not have permission to make this request. Your Member must be self-managed.'
membership_team_delete_202 = 'You do not have permission to make this request. You must be an administrator.'
# ---- File: lib/th_SMPLX.py (repo: MoyGcc/IPNet, license: Unlicense) ----
'''
Takes in smplx params and initialises a smplx object with optimizable params.
class th_SMPL currently does not take batch dim.
If code works:
Author: Bharat
else:
Author: Anonymous
'''
import numpy as np
import torch
import torch.nn as nn
from lib.smpl_layer import SMPL_Layer
from lib.body_objectives import torch_pose_obj_data
from lib.torch_functions import batch_sparse_dense_matmul
from lib.smplx.body_models import SMPLX
# binary_mask = torch.zeros((1, 10475, 3)).cuda()
# rest_idx = np.load('/home/chen/IPNet_SMPLX/assets/rest_idx.npy')
# binary_mask[:,rest_idx,:] = 1.
face_idx = np.load('/home/chen/Downloads/smplx_mano_flame_correspondences/SMPL-X__FLAME_vertex_ids.npy')
hand_idx = np.load('/home/chen/IPNet_SMPLX/assets/hand_idx.npy')
num_pca_comps=6
binary_mask = torch.ones((1, 10475, 3)).cuda()
binary_mask[:,face_idx,:] = 0.
binary_mask[:,hand_idx,:] = 0.
class th_batch_SMPLX_split_params(nn.Module):
"""
Alternate implementation of th_batch_SMPL that allows us to independently optimise:
1. global_pose
2. remaining body_pose
    3. top betas (primarily adjusts bone lengths)
4. other betas
"""
def __init__(self, batch_sz=1, top_betas=None, other_betas=None, global_pose=None, body_pose=None,
left_hand_pose=None, right_hand_pose=None, trans=None,
expression=None, jaw_pose=None, leye_pose=None, reye_pose=None,
offsets=None, faces=None, gender='male'):
super(th_batch_SMPLX_split_params, self).__init__()
if top_betas is None:
self.top_betas = nn.Parameter(torch.zeros(batch_sz, 2))
else:
assert top_betas.ndim == 2
self.top_betas = nn.Parameter(top_betas)
if other_betas is None:
self.other_betas = nn.Parameter(torch.zeros(batch_sz, 8))
else:
assert other_betas.ndim == 2
self.other_betas = nn.Parameter(other_betas)
if global_pose is None:
self.global_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert global_pose.ndim == 2
self.global_pose = nn.Parameter(global_pose)
if body_pose is None:
self.body_pose = nn.Parameter(torch.zeros(batch_sz, 63))
else:
assert body_pose.ndim == 2
self.body_pose = nn.Parameter(body_pose)
if left_hand_pose is None:
self.left_hand_pose = nn.Parameter(torch.zeros(batch_sz, 45))
else:
assert left_hand_pose.ndim == 2
self.left_hand_pose = nn.Parameter(left_hand_pose)
if right_hand_pose is None:
self.right_hand_pose = nn.Parameter(torch.zeros(batch_sz, 45))
else:
assert right_hand_pose.ndim == 2
self.right_hand_pose = nn.Parameter(right_hand_pose)
if trans is None:
self.trans = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert trans.ndim == 2
self.trans = nn.Parameter(trans)
if expression is None:
self.expression = nn.Parameter(torch.zeros(batch_sz, 10))
else:
assert expression.ndim == 2
self.expression = nn.Parameter(expression)
if jaw_pose is None:
self.jaw_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert jaw_pose.ndim == 2
self.jaw_pose = nn.Parameter(jaw_pose)
if leye_pose is None:
self.leye_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert leye_pose.ndim == 2
self.leye_pose = nn.Parameter(leye_pose)
if reye_pose is None:
self.reye_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert reye_pose.ndim == 2
            self.reye_pose = nn.Parameter(reye_pose)
if offsets is None:
self.offsets = nn.Parameter(torch.zeros(batch_sz, 10475, 3))
else:
assert offsets.ndim == 3
self.offsets = nn.Parameter(offsets)
self.betas = torch.cat([self.top_betas, self.other_betas], axis=1)
# self.pose = torch.cat([self.global_pose, self.body_pose], axis=1)
self.faces = faces
self.gender = gender
# pytorch smplx
self.smplx = SMPLX(model_path="/home/chen/SMPLX/models/smplx", batch_size=batch_sz, gender=gender)
# Landmarks
self.body25_reg_torch, self.face_reg_torch, self.hand_reg_torch = torch_pose_obj_data(batch_size=batch_sz)
def forward(self):
self.betas = torch.cat([self.top_betas, self.other_betas], axis=1)
# self.pose = torch.cat([self.global_pose, self.body_pose], axis=1)
output = self.smplx(betas = self.betas,
global_orient = self.global_pose,
body_pose = self.body_pose,
left_hand_pose = self.left_hand_pose[:, :num_pca_comps],
right_hand_pose = self.right_hand_pose[:, :num_pca_comps],
transl = self.trans,
expression = self.expression,
jaw_pose = self.jaw_pose,
leye_pose = self.leye_pose,
reye_pose = self.reye_pose,
displacement=self.offsets)
# return verts, jtr, tposed, naked
return output.vertices
def get_landmarks(self):
"""Computes body25 joints for SMPL along with hand and facial landmarks"""
verts, _, _, _ = self.smplx(self.pose,
th_betas=self.betas,
th_trans=self.trans,
th_offsets=self.offsets)
J = batch_sparse_dense_matmul(self.body25_reg_torch, verts)
face = batch_sparse_dense_matmul(self.face_reg_torch, verts)
hands = batch_sparse_dense_matmul(self.hand_reg_torch, verts)
return J, face, hands
class th_batch_SMPLX(nn.Module):
def __init__(self, batch_sz=1, betas=None, global_pose=None, body_pose=None,
left_hand_pose=None, right_hand_pose=None, trans=None,
expression=None, jaw_pose=None, leye_pose=None, reye_pose=None,
offsets=None, faces=None, gender='male'):
super(th_batch_SMPLX, self).__init__()
if betas is None:
self.betas = nn.Parameter(torch.zeros(batch_sz, 10))
else:
assert betas.ndim == 2
self.betas = nn.Parameter(betas)
if global_pose is None:
self.global_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert global_pose.ndim == 2
self.global_pose = nn.Parameter(global_pose)
if body_pose is None:
self.body_pose = nn.Parameter(torch.zeros(batch_sz, 63))
else:
assert body_pose.ndim == 2
self.body_pose = nn.Parameter(body_pose)
if left_hand_pose is None:
self.left_hand_pose = nn.Parameter(torch.zeros(batch_sz, 45))
else:
assert left_hand_pose.ndim == 2
self.left_hand_pose = nn.Parameter(left_hand_pose)
if right_hand_pose is None:
self.right_hand_pose = nn.Parameter(torch.zeros(batch_sz, 45))
else:
assert right_hand_pose.ndim == 2
self.right_hand_pose = nn.Parameter(right_hand_pose)
if trans is None:
self.trans = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert trans.ndim == 2
self.trans = nn.Parameter(trans)
if expression is None:
self.expression = nn.Parameter(torch.zeros(batch_sz, 10))
else:
assert expression.ndim == 2
self.expression = nn.Parameter(expression)
if jaw_pose is None:
self.jaw_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert jaw_pose.ndim == 2
self.jaw_pose = nn.Parameter(jaw_pose)
if leye_pose is None:
self.leye_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert leye_pose.ndim == 2
self.leye_pose = nn.Parameter(leye_pose)
if reye_pose is None:
self.reye_pose = nn.Parameter(torch.zeros(batch_sz, 3))
else:
assert reye_pose.ndim == 2
            self.reye_pose = nn.Parameter(reye_pose)
if offsets is None:
self.offsets = nn.Parameter(torch.zeros(batch_sz, 10475,3))
else:
assert offsets.ndim == 3
self.offsets = nn.Parameter(offsets)
self.faces = faces
self.gender = gender
# self.pose = torch.cat([self.global_pose, self.body_pose], axis=1)
# pytorch smplx
self.smplx = SMPLX(model_path="/home/chen/SMPLX/models/smplx", batch_size=batch_sz, gender=gender)
# Landmarks
self.body25_reg_torch, self.face_reg_torch, self.hand_reg_torch = torch_pose_obj_data(batch_size=batch_sz)
def forward(self):
# self.pose = torch.cat([self.global_pose, self.body_pose], axis=1)
output = self.smplx(betas = self.betas,
global_orient = self.global_pose,
body_pose = self.body_pose,
left_hand_pose = self.left_hand_pose[:, :num_pca_comps],
right_hand_pose = self.right_hand_pose[:, :num_pca_comps],
transl = self.trans,
expression = self.expression,
jaw_pose = self.jaw_pose,
leye_pose = self.leye_pose,
reye_pose = self.reye_pose,
displacement=self.offsets)
# return verts, jtr, tposed, naked
return output.vertices
def get_vertices_clean_hand(self):
self.offsets_clean_hand = self.offsets.detach().clone()
self.offsets_clean_hand = self.offsets_clean_hand * binary_mask
output = self.smplx(betas = self.betas,
global_orient = self.global_pose,
body_pose = self.body_pose,
left_hand_pose = self.left_hand_pose[:, :num_pca_comps],
right_hand_pose = self.right_hand_pose[:, :num_pca_comps],
transl = self.trans,
expression = self.expression,
jaw_pose = self.jaw_pose,
leye_pose = self.leye_pose,
reye_pose = self.reye_pose,
displacement=self.offsets_clean_hand)
return output.vertices
def get_landmarks(self):
"""Computes body25 joints for SMPL along with hand and facial landmarks"""
verts, _, _, _ = self.smplx(self.pose,
th_betas=self.betas,
th_trans=self.trans,
th_offsets=self.offsets)
J = batch_sparse_dense_matmul(self.body25_reg_torch, verts)
face = batch_sparse_dense_matmul(self.face_reg_torch, verts)
hands = batch_sparse_dense_matmul(self.hand_reg_torch, verts)
return J, face, hands
class th_SMPLX(nn.Module):
def __init__(self, betas=None, global_pose=None, body_pose=None,
left_hand_pose=None, right_hand_pose=None, trans=None,
expression=None, jaw_pose=None, leye_pose=None, reye_pose=None,
offsets=None, gender='male'):
super(th_SMPLX, self).__init__()
if betas is None:
self.betas = nn.Parameter(torch.zeros(10,))
else:
self.betas = nn.Parameter(betas)
if global_pose is None:
self.global_pose = nn.Parameter(torch.zeros(3,))
else:
self.global_pose = nn.Parameter(global_pose)
if body_pose is None:
self.body_pose = nn.Parameter(torch.zeros(63,))
else:
self.body_pose = nn.Parameter(body_pose)
if left_hand_pose is None:
self.left_hand_pose = nn.Parameter(torch.zeros(45,))
else:
self.left_hand_pose = nn.Parameter(left_hand_pose)
if right_hand_pose is None:
self.right_hand_pose = nn.Parameter(torch.zeros(45,))
else:
self.right_hand_pose = nn.Parameter(right_hand_pose)
if trans is None:
self.trans = nn.Parameter(torch.zeros(3,))
else:
self.trans = nn.Parameter(trans)
if expression is None:
self.expression = nn.Parameter(torch.zeros(10,))
else:
self.expression = nn.Parameter(expression)
if jaw_pose is None:
self.jaw_pose = nn.Parameter(torch.zeros(3,))
else:
self.jaw_pose = nn.Parameter(jaw_pose)
if leye_pose is None:
self.leye_pose = nn.Parameter(torch.zeros(3,))
else:
self.leye_pose = nn.Parameter(leye_pose)
if reye_pose is None:
self.reye_pose = nn.Parameter(torch.zeros(3,))
else:
self.reye_pose = nn.Parameter(reye_pose)
if offsets is None:
self.offsets = nn.Parameter(torch.zeros(10475,3))
else:
self.offsets = nn.Parameter(offsets)
self.pose = torch.cat([self.global_pose, self.body_pose], axis=0)
## pytorch smplx
        self.smplx = SMPLX(model_path="/home/chen/SMPLX/models/smplx", batch_size=1, gender=gender)
def forward(self):
self.pose = torch.cat([self.global_pose, self.body_pose], axis=0)
verts, jtr, tposed, naked = self.smplx(betas = self.betas.unsqueeze(axis=0),
global_orient = self.global_pose.unsqueeze(axis=0),
body_pose = self.body_pose.unsqueeze(axis=0),
left_hand_pose = self.left_hand_pose.unsqueeze(axis=0),
right_hand_pose = self.right_hand_pose.unsqueeze(axis=0),
transl = self.trans.unsqueeze(axis=0),
expression = self.expression.unsqueeze(axis=0),
jaw_pose = self.jaw_pose.unsqueeze(axis=0),
leye_pose = self.leye_pose.unsqueeze(axis=0),
reye_pose = self.reye_pose.unsqueeze(axis=0),
displacement=self.offsets.unsqueeze(axis=0))
return verts[0]
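The point of th_batch_SMPLX_split_params above is that keeping slices of one logical vector as separate nn.Parameters lets an optimizer target only some of them (e.g. fit the top betas first). A minimal standalone torch sketch of that pattern (hypothetical module, no SMPL-X assets needed):

```python
import torch
import torch.nn as nn

class SplitParams(nn.Module):
    # Two slices of one logical 10-dim vector kept as separate Parameters,
    # so an optimizer can be built over just one slice.
    def __init__(self):
        super().__init__()
        self.top = nn.Parameter(torch.zeros(1, 2))
        self.rest = nn.Parameter(torch.zeros(1, 8))

    def full(self):
        return torch.cat([self.top, self.rest], dim=1)

m = SplitParams()
opt = torch.optim.SGD([m.top], lr=0.1)  # optimise only the first slice
loss = (m.full() - 1.0).pow(2).sum()
loss.backward()
opt.step()
```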
# ---- File: cogs/economy_ko.py (repo: PLM912/Keter, license: MIT) ----
import discord
import time
import psutil
import os
import asyncio
import openpyxl
import random
import math
import numpy as np
import datetime
import matplotlib.pyplot as plt
from datetime import datetime
from discord.ext import commands
from evs import default, permissions
userlib = "./lib/economy/users/"
stocklib = "./lib/economy/stocks/"
cachelib = "./lib/cache/"
categories = ["농업", "목축업", "광업", "제조업", "인프라설계업", "운송업", "언론", "금융", "방위산업", "교육", "의료", "중공업", "전자산업", "대행업", "게임", "IT", "복합"]
def keundon(value: int):
    """Format an integer with Korean 10^4 number units (만/억/조/경)."""
    value = int(value)
    if value < 0:
        return "변수는 음수값을 가질 수 없습니다."  # negative values are rejected
elif 0 <= value < 10000:
return str(value)
elif 10000 <= value < 100000000:
return str(math.floor(value / 10000)) + "만 " + str(value - math.floor(value / 10000) * 10000)
elif 100000000 <= value < 1000000000000:
return str(math.floor(value / 100000000)) + "억 " + str(math.floor(value / 10000) - math.floor(value / 100000000) * 10000) + "만 " + str(value - math.floor(value / 10000) * 10000)
elif 1000000000000 <= value < 10000000000000000:
return str(math.floor(value / 1000000000000)) + "조 " + str(math.floor(value / 100000000) - math.floor(value / 1000000000000) * 10000) + "억 " + str(math.floor(value / 10000) - math.floor(value / 100000000) * 10000) + "만 " + str(value - math.floor(value / 10000) * 10000)
elif 10000000000000000 <= value < 100000000000000000000:
return str(math.floor(value / 10000000000000000)) + "경 " + str(math.floor(value / 1000000000000) - math.floor(value / 10000000000000000) * 10000) + "조 " + str(math.floor(value / 100000000) - math.floor(value / 1000000000000) * 10000) + "억 " + str(math.floor(value / 10000) - math.floor(value / 100000000) * 10000) + "만 " + str(value - math.floor(value / 10000) * 10000)
else:
return "변수의 크기가 너무 큽니다."
class economy_ko(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.config = default.get("config.json")
self.process = psutil.Process(os.getpid())
        # Create the data folders if they do not exist yet
if os.path.isdir("./lib/economy/users"):
print("user folder exist")
else:
os.makedirs("./lib/economy/users")
if os.path.isdir("./lib/economy/stocks"):
print("stocks folder exist")
else:
os.makedirs("./lib/economy/stocks")
    # Money per message
@commands.Cog.listener()
async def on_message(self, ctx):
if ctx.guild.id == 749595288280498188:
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
randomnum = random.randrange(10, 30)
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
suvmoney = int(ws.cell(row=1, column=2).value)
suvmoney = suvmoney + randomnum
ws.cell(row=1, column=2).value = str(suvmoney)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
    # Join command
@commands.command()
async def 참여(self, ctx):
embed = discord.Embed(title="케테르 경제", description="케테르 경제에 참여하시겠습니까?", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
msg = await ctx.send(embed=embed)
def reaction_check_(m):
if m.message_id == msg.id and m.user_id == ctx.author.id and str(m.emoji) == "✅":
return True
return False
try:
await msg.add_reaction("✅")
await self.bot.wait_for('raw_reaction_add', timeout=10.0, check=reaction_check_)
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
embed = discord.Embed(title="케테르 경제", description="이미 참여하셨습니다.", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="케테르 경제",
description="새로 오셨군요? " + str(ctx.author.name) + "님을 위한 파일들을 생성중이에요!",
color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752690012369190942/DARK_KETER_1.png")
await ctx.send(embed=embed)
wb = openpyxl.Workbook()
ws = wb.active
ws.cell(row=1, column=1).value = "Hello World" #:)
ws.cell(row=1, column=2).value = "8600000" # money
ws.cell(row=1, column=3).value = "0" # pres
ws.cell(row=1, column=4).value = "-" # rank
ws.cell(row=1, column=5).value = "0" # timesleep
ws.cell(row=1, column=6).value = "0" # gamblesleep
ws.cell(row=2, column=1).value = "None" # status
ws.cell(row=2, column=2).value = "0" # perfect
ws.cell(row=2, column=3).value = "0" # great
ws.cell(row=2, column=4).value = "0" # good
ws.cell(row=2, column=5).value = "0" # bad
ws.cell(row=3, column=1).value = "0" # tsucc
ws.cell(row=3, column=2).value = "0" # tfail
ws.cell(row=3, column=3).value = "0" # fails
ws.cell(row=4, column=1).value = "0" # home count
ws.cell(row=4, column=2).value = "[1]" # title
ws.cell(row=4, column=3).value = "1" # header
ws.cell(row=4, column=4).value = "1" # tail
ws.cell(row=5, column=1).value = "100" # HP
ws.cell(row=5, column=2).value = "100" # STR
ws.cell(row=5, column=3).value = "100" # DEF
ws.cell(row=5, column=4).value = "100" # INT
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
await asyncio.sleep(1)  # time.sleep would block the event loop
embed = discord.Embed(title="케테르 경제",
description=str(ctx.author.name) + " 생성 완료!",
color=0xeff0f1)
await ctx.send(embed=embed)
except asyncio.TimeoutError:
await msg.delete()
embed = discord.Embed(title="케테르 경제", description="서명하지 않으셨습니다. 다음 기회에..", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752690012369190942/DARK_KETER_1.png")
await ctx.send(embed=embed)
except discord.Forbidden:
embed = discord.Embed(title="케테르 경제", description="케테르 경제에 참여하시겠습니까?", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await msg.edit(embed=embed)
@commands.command(aliases=['돈내놔', '돈줘'])
async def 돈받기(self, ctx):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
num = random.randrange(100, 120)
jackpot = random.random()
if jackpot < 0.001:
num = num * 100000
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
getmoney = ws.cell(row=1, column=2).value
getmoney = int(getmoney) + int(num)
ws.cell(row=1, column=2).value = str(getmoney)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KET", description="<@" + str(ctx.author.id) + "> " + str(
num) + "<:ket:753449741186105375>을 받았어요!", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command()
async def 돈(self, ctx):
if len(ctx.message.mentions) > 0:
for user in ctx.message.mentions:
if os.path.isfile(userlib + str(user.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(user.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
wb.close()
kundon = keundon(int(money))
embed = discord.Embed(title="KET", description="<@" + str(
user.id) + ">님은 " + kundon + "<:ket:753449741186105375>을 가지고 계십니다!", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="유저가 ``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
else:
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
wb.close()
kundon = keundon(int(money))
embed = discord.Embed(title="KET", description="<@" + str(
ctx.author.id) + "> " + kundon + "<:ket:753449741186105375>을 가지고 계십니다!", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['프리스티지', '프레스티지', 'ㅎㅍ'])
async def 호프(self, ctx):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
prestige = ws.cell(row=1, column=3).value
wb.close()
embed = discord.Embed(title="PRESTIGE", description="<@" + str(ctx.author.id) + "> " + str(
prestige) + "<:pre:753458787465297993>을 가지고 계십니다!", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['ㄷㅂ'])
async def 도박(self, ctx, val: int):
if val <= 0:
embed = discord.Embed(title="NO", description="0 이하로는 베팅할 수 없어요.", color=0xeff0f1)
await ctx.send(embed=embed)
return None
if val > 80000000000:
embed = discord.Embed(title="NO", description="베팅금은 800억 을 초과할 수 없어요.", color=0xeff0f1)
await ctx.send(embed=embed)
return None
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
ti = ws.cell(row=1, column=6).value
if float(ti) > (time.time() - 60):
remaining = float(ti) + 60 - time.time()
embed = discord.Embed(title="NO", description="현재 **" + ctx.author.name + "**님은 도박이 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
minutes, seconds = divmod(round(remaining), 60)
embed.set_footer(text="다음 도박 가능까지 " + str(minutes) + "분 " + str(seconds) + "초")
else:
embed.set_footer(text="다음 도박 가능까지 " + str(round(remaining)) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if int(money) >= val:
discrim = random.random()
enjail = random.random()
if enjail < 0.0001:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 259200)
ws.cell(row=1, column=6).value = str(time.time() + 259200)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 72시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if enjail < 0.0005:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 86400)
ws.cell(row=1, column=6).value = str(time.time() + 86400)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 24시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if enjail < 0.001:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 21600)
ws.cell(row=1, column=6).value = str(time.time() + 21600)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 6시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if discrim < 0.02:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + 11 * val)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="도박", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 대박이 나서 12배를 획득 하셨어요! 🎉\n획득량:" + str(
12 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
elif discrim < 0.05 + math.sqrt(int(ws.cell(row=3, column=3).value) * 100) / 100:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + 2 * val)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="도박", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 도박에 성공하셔서 3배를 획득 하셨어요! 🎉\n획득량:" + str(
3 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
elif discrim < 0.1 + math.sqrt(int(ws.cell(row=3, column=3).value) * 100) / 50:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + val)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="도박", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 도박에 성공하셔서 2배를 획득 하셨어요! 🎉\n획득량:" + str(
2 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
else:
emj = "<:dar:754345236574109716>"
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = str(int(ws.cell(row=3, column=3).value) + 1)
embed = discord.Embed(title="도박", description="<@" + str(
ctx.author.id) + "> " + "도박에 실패하여 돈을 잃으셨습니다. " + emj, color=0xeff0f1)
ws.cell(row=1, column=6).value = str(time.time())
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="보유하신 잔액보다 큰 금액을 베팅할 수는 없어요.", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['ㅇㅇ'])
async def 올인(self, ctx):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
val = int(ws.cell(row=1, column=2).value)
ti = ws.cell(row=1, column=6).value
if float(ti) > (time.time() - 60):
remaining = float(ti) + 60 - time.time()
embed = discord.Embed(title="NO", description="현재 **" + ctx.author.name + "**님은 도박이 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
minutes, seconds = divmod(round(remaining), 60)
embed.set_footer(text="다음 도박 가능까지 " + str(minutes) + "분 " + str(seconds) + "초")
else:
embed.set_footer(text="다음 도박 가능까지 " + str(round(remaining)) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if val > 80000000000:
embed = discord.Embed(title="NO", description="전재산이 800억을 초과하여 올인을 사용하실 수 없습니다.", color=0xeff0f1)
await ctx.send(embed=embed)
return None
discrim = random.random()
enjail = random.random()
if enjail < 0.0001:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 259200)
ws.cell(row=1, column=6).value = str(time.time() + 259200)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 72시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if enjail < 0.0005:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 86400)
ws.cell(row=1, column=6).value = str(time.time() + 86400)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 24시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if enjail < 0.001:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - val)
ws.cell(row=3, column=3).value = "0"
ws.cell(row=1, column=5).value = str(time.time() + 21600)
ws.cell(row=1, column=6).value = str(time.time() + 21600)
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + "> 당신은 불법 도박죄로 기소되었습니다. 최종판결은 다음과 같습니다 : 징역 6시간", color=0xeff0f1)
return await ctx.send(embed=embed)
if discrim < 0.02:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) * 12)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="올인", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 대박이 나서 12배를 획득 하셨어요! 🎉\n획득량:" + str(
12 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
elif discrim < 0.05 + math.sqrt(int(ws.cell(row=3, column=3).value) * 100) / 100:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) * 3)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="올인", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 올인에 성공하셔서 3배를 획득 하셨어요! 🎉\n획득량:" + str(
3 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
elif discrim < 0.1 + math.sqrt(int(ws.cell(row=3, column=3).value) * 100) / 50:
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) * 2)
ws.cell(row=3, column=3).value = "0"
embed = discord.Embed(title="올인", description="<@" + str(
ctx.author.id) + "> " + "축하합니다! 올인에 성공하셔서 2배를 획득 하셨어요! 🎉\n획득량:" + str(
2 * val) + " <:ket:753449741186105375>", color=0xeff0f1)
else:
emj = "<:dar:754345236574109716>"
ws.cell(row=1, column=2).value = "0"
ws.cell(row=3, column=3).value = str(int(ws.cell(row=3, column=3).value) + 1)
embed = discord.Embed(title="도박", description="올인에 실패하여 전재산을 잃으셨습니다. " + emj, color=0xeff0f1)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command()
async def 송금(self, ctx, mention: str, valu: int):
# Reject non-positive amounts: a negative valu would drain the recipient
if valu <= 0:
embed = discord.Embed(title="NO", description="0 이하의 금액은 송금할 수 없어요.", color=0xeff0f1)
return await ctx.send(embed=embed)
if len(ctx.message.mentions) > 0:
for user in ctx.message.mentions:
if os.path.isfile(userlib + str(user.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = int(ws.cell(row=1, column=2).value)
if money >= valu:
wb2 = openpyxl.load_workbook(userlib + str(user.id) + ".xlsx")
ws2 = wb2.active
money2 = int(ws2.cell(row=1, column=2).value)
money2 = money2 + round(valu * 92 / 100)
ws2.cell(row=1, column=2).value = str(money2)
wb2.save(userlib + str(user.id) + ".xlsx")
wb2.close()
money = money - valu
ws.cell(row=1, column=2).value = str(money)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="송금", description="<@" + str(ctx.author.id) + "> " + str(
round(valu * 92 / 100)) + " <:ket:753449741186105375>" + "송금 완료(세율 8%)", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="보유하신 잔액보다 큰 금액을 송금할 수는 없어요.", color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="유저가 ``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command()
@commands.check(permissions.is_owner)
async def 초기화(self, ctx):
file_list = os.listdir(userlib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
wb = openpyxl.load_workbook(userlib + file_list[i])
ws = wb.active
if int(ws.cell(row=1, column=3).value) <= 1000:
ws.cell(row=1, column=3).value = str(
int(ws.cell(row=1, column=3).value) + math.floor(int(ws.cell(row=1, column=2).value) / 1000000000))
else:
ws.cell(row=1, column=3).value = str(round(int(ws.cell(row=1, column=3).value) / 2) + math.ceil(
int(ws.cell(row=1, column=2).value) / 1000000000))
ws.cell(row=1, column=1).value = "Hello World" #:)
ws.cell(row=1, column=2).value = "8600000" # money
ws.cell(row=1, column=4).value = "-" # rank
ws.cell(row=1, column=5).value = "0" # timesleep
ws.cell(row=1, column=6).value = "0" # gamblesleep
ws.cell(row=2, column=1).value = "None" # status
ws.cell(row=2, column=2).value = "0" # perfect
ws.cell(row=2, column=3).value = "0" # great
ws.cell(row=2, column=4).value = "0" # good
ws.cell(row=2, column=5).value = "0" # bad
for j in range(1, 101):
ws.cell(row=6, column=j).value = None # stocks
ws.cell(row=7, column=j).value = None # stocks
wb.save(userlib + file_list[i])
wb.close()
embed = discord.Embed(title="Admin", description="초기화 완료", color=0xeff0f1)
await ctx.send(embed=embed)
@commands.command()
@commands.check(permissions.is_owner)
async def 돈추가(self, ctx, mention: str, value: int):
if len(ctx.message.mentions) > 0:
for user in ctx.message.mentions:
if os.path.isfile(userlib + str(user.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(user.id) + ".xlsx")
ws = wb.active
money = int(ws.cell(row=1, column=2).value) + value
ws.cell(row=1, column=2).value = str(money)  # keep the sheet's string convention
wb.save(userlib + str(user.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KET", description=str(value) + "<:ket:753449741186105375> 추가 완료",
color=0xeff0f1)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="유저가 ``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['회사등록'])
@commands.check(permissions.is_owner)
async def 상장(self, ctx, name: str, stocks: int, price: int, sales: int, ratio: float, business: int):
name = name.replace("_", " ")
if os.path.isfile(stocklib + name + ".xlsx"):
embed = discord.Embed(title="KMF", description="이미 상장된 기업입니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
wb = openpyxl.Workbook()
ws = wb.active
ws.cell(row=1, column=1).value = str(int(stocks)) # 최대주
ws.cell(row=1, column=2).value = "0" # 매매된 주
ws.cell(row=1, column=3).value = "1" # 최근 거래 위치
ws.cell(row=1, column=4).value = str(int(sales)) # 매출
ws.cell(row=1, column=5).value = str(float(ratio)) # 수익률
ws.cell(row=1, column=6).value = str(int(business)) # 업종
ws.cell(row=2, column=1).value = str(int(price)) # 초기가
for i in range(2, 101):
ws.cell(row=2, column=i).value = str(int(price)) # 초기설정
wb.save(stocklib + name + ".xlsx")
wb.close()
await asyncio.sleep(1)  # time.sleep would block the event loop
embed = discord.Embed(title="KMF", description=name + "사 상장 완료!", color=0xeff0f1)
await ctx.send(embed=embed)
@commands.command(aliases=['회사삭제'])
@commands.check(permissions.is_owner)
async def 상장폐지(self, ctx, name: str):
name = name.replace("_", " ")
if os.path.isfile(stocklib + name + ".xlsx"):
os.remove(stocklib + name + ".xlsx")
embed = discord.Embed(title="KMF", description="해당 기업을 상장폐지 하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
embed = discord.Embed(title="KMF", description=name + "는 없는 회사명입니다.", color=0xeff0f1)
await ctx.send(embed=embed)
@commands.command()
@commands.check(permissions.is_owner)
async def 업종(self, ctx):
embed = discord.Embed(title="KMF", description="업종코드와 내용", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
for i, category in enumerate(categories):
embed.add_field(name="code : " + str(i), value=category)
await ctx.send(embed=embed)
@commands.command(aliases=['회사정보'])
async def 회사(self, ctx, name: str):
name = name.replace("_", " ")
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
stoks = ws.cell(row=1, column=1).value
sold = ws.cell(row=1, column=2).value
last = ws.cell(row=1, column=3).value
sales = ws.cell(row=1, column=4).value
ratio = ws.cell(row=1, column=5).value
business = ws.cell(row=1, column=6).value
price = ws.cell(row=2, column=int(last)).value
if last == "1":
prece = ws.cell(row=2, column=100).value
else:
prece = ws.cell(row=2, column=int(last) - 1).value
wb.close()
siga = keundon(int(price) * int(stoks))
perc = round(int(price) * 100 / int(prece) - 100, 2)
if perc > 0:
icon = ":small_red_triangle:"
else:
icon = ":small_red_triangle_down:"
embed = discord.Embed(title=name, color=0xeff0f1)
embed.add_field(name="시가총액", value=siga + " <:ket:753449741186105375>", inline=True)
embed.add_field(name="주가",
value=keundon(int(price)) + " <:ket:753449741186105375> (" + icon + str(abs(perc)) + "%)", inline=True)
embed.add_field(name="매매중인 주", value=keundon(int(stoks) - int(sold)) + "주", inline=True)
embed.add_field(name="할양된 주", value=keundon(int(sold)) + "주", inline=True)
embed.add_field(name="매출", value=keundon(int(sales)) + " <:ket:753449741186105375>", inline=True)
embed.add_field(name="순이익",
value=keundon(round(int(sales) * float(ratio) / 100)) + " <:ket:753449741186105375>", inline=True)
embed.add_field(name="예상 배당금", value=keundon(
round(int(sales) / int(stoks) * float(ratio) / 100)) + " <:ket:753449741186105375>", inline=True)
embed.add_field(name="업종", value=categories[int(business)], inline=True)
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="해당 이름의 회사를 찾지 못하였습니다", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['회사조작'])
@commands.check(permissions.is_owner)
async def 주식조작(self, ctx, name: str, item: str, val: int):
""" item 항목 : 주식총주, 주가, 매출, 수익률, 업종\n수익률의 변수 val은 10이 1%입니다. """
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
else:
embed = discord.Embed(title="NO", description="해당 이름의 회사를 찾지 못하였습니다", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if item == "주식총주":
if val <= int(ws.cell(row=1, column=2).value):
embed = discord.Embed(title="NO", description="총수는 매매된 주보다 적은 수로 변경할 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
else:
ws.cell(row=1, column=1).value = str(val)
wb.save(stocklib + name + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 사(社)의 주식총수를 변경하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if item == "주가":
last = ws.cell(row=1, column=3).value
if last == "100":
next = 1
else:
next = int(last) + 1
ws.cell(row=1, column=3).value = str(next)
ws.cell(row=2, column=next).value = str(val)
wb.save(stocklib + name + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 사(社)의 주가를 변경하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if item == "매출":
ws.cell(row=1, column=4).value = str(val)
wb.save(stocklib + name + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 사(社)의 매출을 변경하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if item == "수익률":
val = val / 10
if val > 100:
embed = discord.Embed(title="NO", description="수익률은 100(%)을 넘길 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if val <= 0:
embed = discord.Embed(title="NO", description="수익률은 0(%)이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
ws.cell(row=1, column=5).value = str(val)
wb.save(stocklib + name + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 사(社)의 수익률을 변경하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
if item == "업종":
if val > len(categories) - 1 or val < 0:
embed = discord.Embed(title="NO", description="변수가 잘못 설정되었습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
ws.cell(row=1, column=6).value = str(val)
wb.save(stocklib + name + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 사(社)의 업종을 변경하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
return None
embed = discord.Embed(title="NO", description="잘못된 변수 : " + item, color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['상장사'])
async def 회사목록(self, ctx, plist: int = 1):
corps = os.listdir(stocklib)
embed = discord.Embed(title="KMF", color=0xeff0f1)
for i in range(10 * (plist - 1), 10 * plist):
try:
embed.add_field(name=str(i + 1), value=corps[i].replace(".xlsx", ""))
except IndexError:
return await ctx.send(embed=embed)
await ctx.send(embed=embed)
@commands.command(aliases=['주식그래프'])
async def 주식(self, ctx, name: str):
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
last = ws.cell(row=1, column=3).value
prices = []
if last == "100":
for i in range(1, 101):
prices.append(int(ws.cell(row=2, column=i).value))
elif last == "1":
for i in range(2, 101):
prices.append(int(ws.cell(row=2, column=i).value))
prices.append(int(ws.cell(row=2, column=1).value))
else:
for i in range(int(last) + 1, 101):
prices.append(int(ws.cell(row=2, column=i).value))
for i in range(1, int(last) + 1):
prices.append(int(ws.cell(row=2, column=i).value))
plt.figure(figsize=(39, 18))
if prices[0] < prices[99]:
plt.step(list(range(1, 101)), prices, 'r-')
else:
plt.step(list(range(1, 101)), prices, 'b-')
plt.title(name, fontsize=64)
plt.xticks(fontsize=32)
plt.yticks(fontsize=32)
plt.xlabel('Recently', fontsize=44)
plt.ylabel('Price', fontsize=44)
plt.savefig(str(ctx.author.id) + ".png", dpi=192)
plt.clf()
plt.close()
wb.close()
await ctx.send(file=discord.File("./" + str(ctx.author.id) + ".png"))
os.remove(str(ctx.author.id) + '.png')
else:
embed = discord.Embed(title="NO", description="해당 이름의 회사를 찾지 못하였습니다", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['내주식', '보유주식', '주식통장'])
async def 보유주(self, ctx):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
inteli = ws.cell(row=5, column=4).value
embed = discord.Embed(title="KMF", description="<@" + str(ctx.author.id) + ">님의 주식통장", color=0xeff0f1)
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value is None:
continue
started = ws.cell(row=8, column=i).value
embed.add_field(name=ws.cell(row=6, column=i).value, value=ws.cell(row=7, column=i).value + "주\n최근 구매가 : " + started + " <:ket:753449741186105375>")
wb.close()
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="먼저 ``.참여``를 입력해서 케테르 경제에 참여해주세요!", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['매수'])
async def 주식구매(self, ctx, name: str, amount: int):
if amount <= 0:
embed = discord.Embed(title="NO", description="매매 주는 0주 이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
inteli = ws.cell(row=5, column=4).value
ti = ws.cell(row=1, column=5).value
wb.close()
block = 0
start = True
# Reuse the slot already holding this stock; otherwise take the first empty slot
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
break
elif ws.cell(row=6, column=i).value is None and block == 0:
block = i
if float(ti) > (time.time() - 360):
remaining = float(ti) + 360 - time.time()
embed = discord.Embed(title="NO", description="주식 거래 후 6분 동안은 추가 매매가 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
minutes, seconds = divmod(round(remaining), 60)
embed.set_footer(text="다음 주식 거래 허가까지 " + str(minutes) + "분 " + str(seconds) + "초")
else:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(round(remaining)) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if block == 0:
embed = discord.Embed(title="NO", description="보유할 수 있는 주식의 종류를 넘었어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
stocks = ws.cell(row=1, column=1).value
sold = ws.cell(row=1, column=2).value
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
if int(stocks) - int(sold) < amount:
embed = discord.Embed(title="NO", description="구매하려는 주가 남지 않았습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
if int(money) < price*amount:
embed = discord.Embed(title="NO", description="돈이 부족합니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(0.995 + (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(0.995 + (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
ws.cell(row=6, column=block).value = name
ws.cell(row=8, column=block).value = str(price)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - price*amount)
ws.cell(row=1, column=5).value = str(time.time())
if start:
ws.cell(row=7, column=block).value = str(amount)
else:
ws.cell(row=7, column=block).value = str(int(ws.cell(row=7, column=block).value) + amount)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 구매하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['매각'])
async def 주식판매(self, ctx, name: str, amount: int):
if amount <= 0:
embed = discord.Embed(title="NO", description="매매 주는 0주 이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
inteli = ws.cell(row=5, column=4).value
ti = ws.cell(row=1, column=5).value
block = 0
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
wb.close()
if block == 0:
embed = discord.Embed(title="NO", description="해당 이름의 주식을 보유하고 계시지 않아요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if amount > int(ws.cell(row=7, column=block).value):
embed = discord.Embed(title="NO", description="매각하려는 주만큼을 보유하고 계시지 않습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if float(ti) > (time.time() - 360):
remaining = round(float(ti) + 360 - time.time())
embed = discord.Embed(title="NO", description="주식 거래 후 6분 동안은 추가 매매가 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining // 60) + "분 " + str(remaining % 60) + "초")
else:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(1.01 - (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(1.01 - (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
if amount == int(ws.cell(row=7, column=block).value):
ws.cell(row=6, column=block).value = None
ws.cell(row=7, column=block).value = None
ws.cell(row=8, column=block).value = None
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + round(price*amount*0.96))
ws.cell(row=1, column=5).value = str(time.time())
else:
ws.cell(row=6, column=block).value = name
ws.cell(row=7, column=block).value = str(int(ws.cell(row=7, column=block).value) - amount)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + round(price*amount*0.96))
ws.cell(row=1, column=5).value = str(time.time())
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 매각하였습니다.", color=0xeff0f1)
embed.add_field(name="판매가", value=keundon(amount*price) + " <:ket:753449741186105375> (세율 4% : " + keundon(round(amount*price*0.04)) + " <:ket:753449741186105375>)")
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
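# A minimal sketch (hypothetical helper, not part of the original bot) of the
# "time until next trade" footer formatting that the trade commands above each
# rebuild inline from minutes and seconds:

```python
def cooldown_text(remaining_seconds):
    # Format a remaining-time footer: "N분 M초" above one minute, "N초" below.
    secs = round(remaining_seconds)
    if secs > 59:
        return str(secs // 60) + "분 " + str(secs % 60) + "초"
    return str(secs) + "초"
```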
@commands.command(aliases=['최매수'])
async def 최대주식구매(self, ctx, name: str):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
assets = int(money)
inteli = ws.cell(row=5, column=4).value
ti = ws.cell(row=1, column=5).value
block = 0
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
break
elif ws.cell(row=6, column=i).value is None and block == 0:
block = i
start = True
wb.close()
if float(ti) > (time.time() - 360):
remaining = round(float(ti) + 360 - time.time())
embed = discord.Embed(title="NO", description="주식 거래 후 6분 동안은 추가 매매가 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining // 60) + "분 " + str(remaining % 60) + "초")
else:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if block == 0:
embed = discord.Embed(title="NO", description="보유할 수 있는 주식의 종류를 넘었어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
stocks = ws.cell(row=1, column=1).value
sold = ws.cell(row=1, column=2).value
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
amount = math.floor(assets/price)
if amount <= 0:
embed = discord.Embed(title="NO", description="매매 주는 0주 이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if int(stocks) - int(sold) < amount:
embed = discord.Embed(title="NO", description="구매하려는 주가 남지 않았습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
if int(money) < price*amount:
embed = discord.Embed(title="NO", description="돈이 부족합니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(0.995 + (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(0.995 + (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
ws.cell(row=6, column=block).value = name
ws.cell(row=8, column=block).value = str(price)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - price*amount)
ws.cell(row=1, column=5).value = str(time.time())
if start:
ws.cell(row=7, column=block).value = str(amount)
else:
ws.cell(row=7, column=block).value = str(int(ws.cell(row=7, column=block).value) + amount)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 구매하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['전매각'])
async def 일괄주식판매(self, ctx, name: str):
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
inteli = ws.cell(row=5, column=4).value
ti = ws.cell(row=1, column=5).value
block = 0
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
wb.close()
if block == 0:
embed = discord.Embed(title="NO", description="해당 이름의 주식을 보유하고 계시지 않아요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if float(ti) > (time.time() - 360):
remaining = round(float(ti) + 360 - time.time())
embed = discord.Embed(title="NO", description="주식 거래 후 6분 동안은 추가 매매가 불가능 합니다.", color=0xeff0f1)
if remaining > 59:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining // 60) + "분 " + str(remaining % 60) + "초")
else:
embed.set_footer(text="다음 주식 거래 허가까지 " + str(remaining) + "초")
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
amount = int(ws.cell(row=7, column=block).value)
wb.close()
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(1.01 - (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(1.01 - (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
ws.cell(row=6, column=block).value = None
ws.cell(row=7, column=block).value = None
ws.cell(row=8, column=block).value = None
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + round(price*amount*0.96))
ws.cell(row=1, column=5).value = str(time.time())
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 매각하였습니다.", color=0xeff0f1)
embed.add_field(name="판매가", value=keundon(amount*price) + " <:ket:753449741186105375> (세율 4% : " + keundon(round(amount*price*0.04)) + " <:ket:753449741186105375>)")
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['관매수'])
@commands.check(permissions.is_owner)
async def 어드민주식(self, ctx, name: str, amount: int):
if amount <= 0:
embed = discord.Embed(title="NO", description="매매 주는 0주 이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
money = ws.cell(row=1, column=2).value
inteli = ws.cell(row=5, column=4).value
block = 0
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
break
elif ws.cell(row=6, column=i).value is None and block == 0:
block = i
start = True
wb.close()
if block == 0:
embed = discord.Embed(title="NO", description="보유할 수 있는 주식의 종류를 넘었어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
stocks = ws.cell(row=1, column=1).value
sold = ws.cell(row=1, column=2).value
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
if int(stocks) - int(sold) < amount:
embed = discord.Embed(title="NO", description="구매하려는 주가 남지 않았습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
if int(money) < price*amount:
embed = discord.Embed(title="NO", description="돈이 부족합니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
wb.close()
return await ctx.send(embed=embed)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(0.995 + (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(0.995 + (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
ws.cell(row=6, column=block).value = name
ws.cell(row=8, column=block).value = str(price)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - price*amount)
ws.cell(row=1, column=5).value = str(time.time())
if start:
ws.cell(row=7, column=block).value = str(amount)
else:
ws.cell(row=7, column=block).value = str(int(ws.cell(row=7, column=block).value) + amount)
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 구매하였습니다.", color=0xeff0f1)
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command(aliases=['관매각'])
@commands.check(permissions.is_owner)
async def 어드민주식판매(self, ctx, name: str, amount: int):
if amount <= 0:
embed = discord.Embed(title="NO", description="매매 주는 0주 이하일 수 없습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(userlib + str(ctx.author.id) + ".xlsx"):
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
inteli = ws.cell(row=5, column=4).value
wb.close()
block = 0
for i in range(1, math.ceil(int(inteli))):
if ws.cell(row=6, column=i).value == name:
block = i
start = False
if block == 0:
embed = discord.Embed(title="NO", description="해당 이름의 주식을 보유하고 계시지 않아요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if amount > int(ws.cell(row=7, column=block).value):
embed = discord.Embed(title="NO", description="매각하려는 주만큼을 보유하고 계시지 않습니다.", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
else:
embed = discord.Embed(title="NO", description="``케테르 경제``에 참여하지 않았어요..", color=0xeff0f1)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
return await ctx.send(embed=embed)
if os.path.isfile(stocklib + name + ".xlsx"):
wb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ws = wb.active
last = ws.cell(row=1, column=3).value
price = int(ws.cell(row=2, column=int(last)).value)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) - amount)
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(1.01 - (amount**0.2)/100 + (random.random()-0.5)/800)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(1.01 - (amount**0.2)/100 + random.random()/800)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + name + ".xlsx")
wb.close()
wb = openpyxl.load_workbook(userlib + str(ctx.author.id) + ".xlsx")
ws = wb.active
if amount == int(ws.cell(row=7, column=block).value):
ws.cell(row=6, column=block).value = None
ws.cell(row=7, column=block).value = None
ws.cell(row=8, column=block).value = None
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + round(price*amount*0.96))
ws.cell(row=1, column=5).value = str(time.time())
else:
ws.cell(row=6, column=block).value = name
ws.cell(row=7, column=block).value = str(int(ws.cell(row=7, column=block).value) - amount)
ws.cell(row=1, column=2).value = str(int(ws.cell(row=1, column=2).value) + round(price*amount*0.96))
ws.cell(row=1, column=5).value = str(time.time())
wb.save(userlib + str(ctx.author.id) + ".xlsx")
wb.close()
embed = discord.Embed(title="KMF", description="해당 주를 " + str(amount) + "주 만큼 매각하였습니다.", color=0xeff0f1)
embed.add_field(name="판매가", value=keundon(amount*price) + " <:ket:753449741186105375> (세율 4% : " + keundon(round(amount*price*0.04)) + " <:ket:753449741186105375>)")
embed.set_thumbnail(
url="https://cdn.discordapp.com/attachments/750540820842807396/752684853320745000/KETER_PRESTIGE.png")
await ctx.send(embed=embed)
@commands.command()
async def 테스트그래프(self, ctx):
x = np.linspace(-6, 6, 30)
y = np.linspace(-6, 6, 30)
x, y = np.meshgrid(x, y)
z = np.sin(np.sqrt(x**2 + y**2))
fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection='3d')
ax.contour3D(x, y, z, 20, cmap=plt.cm.rainbow)
plt.title("ax.contour3D")
plt.savefig(str(ctx.author.id) + ".png", dpi=192)
plt.clf()
plt.close()
await ctx.send(file=discord.File("./" + str(ctx.author.id) + ".png"))
os.remove(str(ctx.author.id) + '.png')
@commands.command()
@commands.check(permissions.is_owner)
async def 전체초기화(self, ctx):
file_list = os.listdir(userlib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
os.remove(userlib + file_list[i])
await ctx.send(file_list[i] + " deleted")
@commands.command()
@commands.check(permissions.is_owner)
async def 상장초기화(self, ctx):
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
os.remove(stocklib + file_list[i])
await ctx.send(file_list[i] + " deleted")
@commands.command()
@commands.check(permissions.is_owner)
async def 주가변동(self, ctx, cycle :int):
if os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send("이미 실행중입니다.")
await ctx.send("코드를 실행합니다.")
f = open(cachelib + "is_started.ccf", "w")
f.close()
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
cycles = 0
while cycles < cycle:
cycles += 1
if not os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send(str(cycles) + " cycle stopped")
for i in range(len(file_list)):
wb = openpyxl.load_workbook(stocklib + file_list[i])
ws = wb.active
last = ws.cell(row=1, column=3).value
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(0.995 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(0.995 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + file_list[i])
wb.close()
if cycles == cycle:
await ctx.send("last cycle reset")
os.remove(cachelib + "is_started.ccf")
else:
await ctx.send(str(cycles) + " cycle reset")
await asyncio.sleep(300)
@commands.command()
@commands.check(permissions.is_owner)
async def 불황변동(self, ctx, cycle :int):
if os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send("이미 실행중입니다.")
await ctx.send("코드를 실행합니다.")
f = open(cachelib + "is_started.ccf", "w")
f.close()
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
cycles = 0
while cycles < cycle:
cycles += 1
if not os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send(str(cycles) + " cycle stopped")
for i in range(len(file_list)):
wb = openpyxl.load_workbook(stocklib + file_list[i])
ws = wb.active
last = ws.cell(row=1, column=3).value
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(0.96 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(0.96 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + file_list[i])
wb.close()
if cycles == cycle:
await ctx.send("last cycle reset")
os.remove(cachelib + "is_started.ccf")
else:
await ctx.send(str(cycles) + " cycle reset")
await asyncio.sleep(300)
@commands.command()
@commands.check(permissions.is_owner)
async def 호황변동(self, ctx, cycle :int):
if os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send("이미 실행중입니다.")
await ctx.send("코드를 실행합니다.")
f = open(cachelib + "is_started.ccf", "w")
f.close()
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
cycles = 0
while cycles < cycle:
cycles += 1
if not os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send(str(cycles) + " cycle stopped")
for i in range(len(file_list)):
wb = openpyxl.load_workbook(stocklib + file_list[i])
ws = wb.active
last = ws.cell(row=1, column=3).value
if last == "100":
ws.cell(row=2, column=1).value = str(round(int(ws.cell(row=2, column=100).value)*(1.005 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = "1"
else:
ws.cell(row=2, column=int(last) + 1).value = str(round(int(ws.cell(row=2, column=int(last)).value)*(1.005 + (random.random()-0.5)/20)))
ws.cell(row=1, column=3).value = str(int(last) + 1)
wb.save(stocklib + file_list[i])
wb.close()
if cycles == cycle:
await ctx.send("last cycle reset")
os.remove(cachelib + "is_started.ccf")
else:
await ctx.send(str(cycles) + " cycle reset")
await asyncio.sleep(300)
@commands.command()
@commands.check(permissions.is_owner)
async def AI변동(self, ctx, cycle :int):
if os.path.isfile(cachelib + "is_started.ccf"):
return await ctx.send("이미 실행중입니다.")
await ctx.send("추가 예정입니다.")
@commands.command()
@commands.check(permissions.is_owner)
async def 변동픽스(self, ctx):
if os.path.isfile(cachelib + "is_started.ccf"):
os.remove(cachelib + "is_started.ccf")
await ctx.send("캐시를 제거하였습니다.")
else:
await ctx.send("캐시파일이 없습니다")
@commands.command()
@commands.check(permissions.is_owner)
async def 배당시작(self, ctx):
if os.path.isfile(cachelib + "is_divided.ccf"):
return await ctx.send("이미 실행중입니다.")
await ctx.send("코드를 실행합니다.")
f = open(cachelib + "is_divided.ccf", "w")
f.close()
while os.path.isfile(cachelib + "is_divided.ccf"):
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
wb = openpyxl.load_workbook(stocklib + file_list[i])
ws = wb.active
ws.cell(row=1, column=4).value = str(round(float(ws.cell(row=1, column=4).value) * (1 + (random.random() - 0.48)/32)))
wb.save(stocklib + file_list[i])
wb.close()
await ctx.send("all stocks have been reset")
file_list = os.listdir(userlib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
wb = openpyxl.load_workbook(userlib + file_list[i])
ws = wb.active
money = float(ws.cell(row=1, column=2).value)
inteli = ws.cell(row=5, column=4).value
for j in range(1, math.ceil(int(inteli))):
name = ws.cell(row=6, column=j).value
try:
amount = int(ws.cell(row=7, column=j).value)
except (TypeError, ValueError):
continue
sb = openpyxl.load_workbook(stocklib + name + ".xlsx")
ss = sb.active
percap = int(ss.cell(row=1, column=4).value) / int(ss.cell(row=1, column=1).value)
money += amount*percap
sb.close()
ws.cell(row=1, column=2).value = str(round(money))
wb.save(userlib + file_list[i])
wb.close()
await ctx.send("all users have received their dividend")
await asyncio.sleep(86400)
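# A minimal sketch (hypothetical helper, not part of the original bot) of the
# per-holder dividend computation performed in the loop above: each holder
# receives (dividend pool / total shares) per share held.

```python
def dividend_payout(pool, total_shares, held):
    # Per-share dividend times the number of shares this user holds.
    per_share = pool / total_shares
    return held * per_share
```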
@commands.command()
@commands.check(permissions.is_owner)
async def 배당픽스(self, ctx):
if os.path.isfile(cachelib + "is_divided.ccf"):
os.remove(cachelib + "is_divided.ccf")
await ctx.send("캐시를 제거하였습니다.")
else:
await ctx.send("캐시파일이 없습니다")
@commands.command()
@commands.check(permissions.is_owner)
async def 할양초기화(self, ctx):
file_list = os.listdir(stocklib)
file_list = [file for file in file_list if file.endswith(".xlsx")]
for i in range(len(file_list)):
wb = openpyxl.load_workbook(stocklib + file_list[i])
ws = wb.active
ws.cell(row=1, column=2).value = "0"
wb.save(stocklib + file_list[i])
wb.close()
await ctx.send(file_list[i] + " reset")
def setup(bot):
bot.add_cog(economy_ko(bot))
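# A minimal sketch (hypothetical helper, not part of the original bot) of the
# price-update formula that every trade command above repeats inline.  base is
# 0.995 for buys and 1.01 for sells, sign is +1 for buys and -1 for sells, and
# jitter stands in for the random term:

```python
def next_price(price, amount, base, sign, jitter=0.0):
    # New price after a trade of `amount` shares; the amount ** 0.2 term makes
    # larger trades move the price more, with diminishing returns.
    return round(price * (base + sign * (amount ** 0.2) / 100 + jitter))
```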
# --- vegadns_cli/commands/default_records.py (shupp/VegaDNS-CLI, Apache-2.0) ---
from builtins import str
import click
import json
import logging
from vegadns_client.exceptions import ClientException
from vegadns_cli.common import default_records
logger = logging.getLogger(__name__)
@default_records.command()
@click.pass_context
def list(ctx):
"""List default records"""
try:
collection = ctx.obj['client'].default_records()
default_records = []
for default_record in collection:
default_records.append(default_record.values)
click.echo(json.dumps(default_records, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)
@default_records.command()
@click.option(
"--ttl",
type=int,
prompt=False,
help="TTL of the default record, defaults to 3600"
)
@click.option(
"--ip",
type=str,
prompt=True,
help="IPv4 address of the default record, required"
)
@click.option(
"--name",
type=str,
prompt=True,
help="Hostname of the default record, required"
)
@click.option(
"--record-id",
type=int,
prompt=True,
help="ID of the default record to edit, required"
)
@click.pass_context
def edit_a(ctx, record_id, name, ip, ttl=3600):
"""Edit a default A record"""
try:
data = {
"record_type": "A",
"name": name,
"value": ip,
"ttl": ttl
}
record = ctx.obj['client'].default_record(record_id)
r = record.edit(data)
click.echo(json.dumps(r.values, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)
@default_records.command()
@click.option(
"--ttl",
type=int,
prompt=False,
help="TTL of the default record, defaults to 3600"
)
@click.option(
"--ip",
type=str,
prompt=True,
help="IPv4 address of the default record, required"
)
@click.option(
"--name",
type=str,
prompt=True,
help="Hostname of the default record, required"
)
@click.pass_context
def create_a(ctx, name, ip, ttl=3600):
"""Create a default A record"""
try:
data = {
"record_type": "A",
"name": name,
"value": ip,
"ttl": ttl
}
record = ctx.obj['client'].default_records.create(data)
click.echo(json.dumps(record.values, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)
@default_records.command()
@click.option(
"--ttl",
type=int,
prompt=False,
help="TTL, defaults to 3600"
)
@click.option(
"--ip",
type=str,
prompt=True,
help="IPv6 address of the default record, required"
)
@click.option(
"--name",
type=str,
prompt=True,
help="Hostname of the default record, required"
)
@click.option(
"--record-id",
type=int,
prompt=True,
help="ID of the default record to edit, required"
)
@click.pass_context
def edit_aaaa(ctx, record_id, name, ip, ttl=3600):
"""Edit default AAAA record"""
try:
data = {
"name": name,
"value": ip,
"ttl": ttl
}
r = ctx.obj['client'].default_record(record_id)
record = r.edit(data)
click.echo(json.dumps(record.values, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)
@default_records.command()
@click.option(
"--ttl",
type=int,
prompt=False,
help="TTL of the default record to create, defaults to 3600"
)
@click.option(
"--ip",
type=str,
prompt=True,
help="IPv6 address of the default record to create, required"
)
@click.option(
"--name",
type=str,
prompt=True,
help="Hostname of the default record to create, required"
)
@click.pass_context
def create_aaaa(ctx, name, ip, ttl=3600):
"""Create default AAAA record"""
try:
data = {
"record_type": "AAAA",
"name": name,
"value": ip,
"ttl": ttl
}
record = ctx.obj['client'].default_records.create(data)
click.echo(json.dumps(record.values, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)
@default_records.command()
@click.option(
"--ttl",
type=int,
prompt=False,
help="TTL of the default record, defaults to 3600"
)
@click.option(
"--value",
type=str,
prompt=True,
help="Value of the default record, required"
)
@click.option(
"--name",
type=str,
prompt=True,
help="Hostname of the default record, required"
)
@click.option(
"--record-id",
type=int,
prompt=True,
help="ID of the default record to edit, required"
)
@click.pass_context
def edit_cname(ctx, record_id, name, value, ttl=3600):
"""Edit default CNAME record"""
try:
data = {
"name": name,
"value": value,
"ttl": ttl
}
r = ctx.obj['client'].default_record(record_id)
record = r.edit(data)
click.echo(json.dumps(record.values, indent=4))
except ClientException as e:
click.echo("Error: " + str(e.code))
click.echo("Response: " + str(e.message))
ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record to create, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record to create, required"
)
@click.pass_context
def create_cname(ctx, name, value, ttl=3600):
    """Create default CNAME record"""
    try:
        data = {
            "record_type": "CNAME",
            "name": name,
            "value": value,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record, required"
)
@click.pass_context
def edit_ns(ctx, record_id, name, value, ttl=3600):
    """Edit default NS record"""
    try:
        data = {
            "name": name,
            "value": value,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record to create, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record to create, required"
)
@click.pass_context
def create_ns(ctx, name, value, ttl=3600):
    """Create default NS record"""
    try:
        data = {
            "record_type": "NS",
            "name": name,
            "value": value,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record to edit, required"
)
@click.pass_context
def edit_txt(ctx, record_id, name, value, ttl=3600):
    """Edit default TXT record"""
    try:
        data = {
            "name": name,
            "value": value,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.pass_context
def create_txt(ctx, name, value, ttl=3600):
    """Create default TXT record"""
    try:
        data = {
            "record_type": "TXT",
            "name": name,
            "value": value,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--distance",
    type=int,
    prompt=False,
    default=0,
    help="Distance of the default record, defaults to 0"
)
@click.option(
    "--port",
    type=int,
    prompt=True,
    help="Port of the default record, required"
)
@click.option(
    "--weight",
    type=int,
    prompt=True,
    help="Weight of the default record, required"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record, required"
)
@click.pass_context
def edit_srv(ctx, record_id, name, value, weight, port, distance=0, ttl=3600):
    """Edit default SRV record"""
    try:
        data = {
            "name": name,
            "value": value,
            "weight": weight,
            "port": port,
            "distance": distance,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--distance",
    type=int,
    prompt=False,
    default=0,
    help="Distance of the default record, defaults to 0"
)
@click.option(
    "--port",
    type=int,
    prompt=True,
    help="Port of the default record, required"
)
@click.option(
    "--weight",
    type=int,
    prompt=True,
    help="Weight of the default record, required"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.pass_context
def create_srv(ctx, name, value, weight, port, distance=0, ttl=3600):
    """Create default SRV record"""
    try:
        data = {
            "record_type": "SRV",
            "name": name,
            "value": value,
            "weight": weight,
            "port": port,
            "distance": distance,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record, required"
)
@click.pass_context
def edit_spf(ctx, record_id, name, value, ttl=3600):
    """Edit default SPF record"""
    try:
        data = {
            "name": name,
            "value": value,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.pass_context
def create_spf(ctx, name, value, ttl=3600):
    """Create default SPF record"""
    try:
        data = {
            "record_type": "SPF",
            "name": name,
            "value": value,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=86400,
    help="TTL, defaults to 86400"
)
@click.option(
    "--serial",
    type=int,
    prompt=False,
    help="Custom serial number, defaults to none (autogenerated)"
)
@click.option(
    "--minimum",
    type=int,
    prompt=False,
    default=2560,
    help="Minimum TTL, defaults to 2560"
)
@click.option(
    "--expire",
    type=int,
    prompt=False,
    default=1048576,
    help="Expire time, defaults to 1048576"
)
@click.option(
    "--retry",
    type=int,
    prompt=False,
    default=2048,
    help="Retry time, defaults to 2048"
)
@click.option(
    "--refresh",
    type=int,
    prompt=False,
    default=16374,
    help="Refresh time, defaults to 16374"
)
@click.option(
    "--nameserver",
    type=str,
    prompt=True,
    help="Authoritative name server, e.g. ns1.example.com, required"
)
@click.option(
    "--email",
    type=str,
    prompt=True,
    help="Domain contact, e.g. hostmaster.example.com, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the SOA record to edit, required"
)
@click.pass_context
def edit_soa(ctx, record_id, email, nameserver, refresh=16374, retry=2048,
             expire=1048576, minimum=2560, serial=None, ttl=86400):
    """Edit default SOA record"""
    try:
        data = {
            "email": email,
            "nameserver": nameserver,
            "refresh": refresh,
            "retry": retry,
            "expire": expire,
            "minimum": minimum,
            "serial": serial,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=86400,
    help="TTL of the default record to create, defaults to 86400"
)
@click.option(
    "--serial",
    type=int,
    prompt=False,
    help="Custom serial number, defaults to none (autogenerated)"
)
@click.option(
    "--minimum",
    type=int,
    prompt=False,
    default=2560,
    help="Minimum TTL, defaults to 2560"
)
@click.option(
    "--expire",
    type=int,
    prompt=False,
    default=1048576,
    help="Expire time, defaults to 1048576"
)
@click.option(
    "--retry",
    type=int,
    prompt=False,
    default=2048,
    help="Retry time, defaults to 2048"
)
@click.option(
    "--refresh",
    type=int,
    prompt=False,
    default=16374,
    help="Refresh time, defaults to 16374"
)
@click.option(
    "--nameserver",
    type=str,
    prompt=True,
    help="Authoritative name server, e.g. ns1.example.com, required"
)
@click.option(
    "--email",
    type=str,
    prompt=True,
    help="Domain contact, e.g. hostmaster.example.com, required"
)
@click.pass_context
def create_soa(ctx, email, nameserver, refresh=16374, retry=2048,
               expire=1048576, minimum=2560, serial=None, ttl=86400):
    """Create default SOA record (limited to one)"""
    try:
        data = {
            "record_type": "SOA",
            "email": email,
            "nameserver": nameserver,
            "refresh": refresh,
            "retry": retry,
            "expire": expire,
            "minimum": minimum,
            "serial": serial,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--distance",
    type=int,
    prompt=False,
    default=0,
    help="Distance of the default record, defaults to 0"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record, required"
)
@click.pass_context
def edit_mx(ctx, record_id, name, value, distance=0, ttl=3600):
    """Edit default MX record"""
    try:
        data = {
            "name": name,
            "value": value,
            "distance": distance,
            "ttl": ttl
        }
        r = ctx.obj['client'].default_record(record_id)
        record = r.edit(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--ttl",
    type=int,
    prompt=False,
    default=3600,
    help="TTL of the default record, defaults to 3600"
)
@click.option(
    "--distance",
    type=int,
    prompt=False,
    default=0,
    help="Distance of the default record, defaults to 0"
)
@click.option(
    "--value",
    type=str,
    prompt=True,
    help="Value of the default record, required"
)
@click.option(
    "--name",
    type=str,
    prompt=True,
    help="Hostname of the default record, required"
)
@click.pass_context
def create_mx(ctx, name, value, distance=0, ttl=3600):
    """Create default MX record"""
    try:
        data = {
            "record_type": "MX",
            "name": name,
            "value": value,
            "distance": distance,
            "ttl": ttl
        }
        record = ctx.obj['client'].default_records.create(data)
        click.echo(json.dumps(record.values, indent=4))
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


@default_records.command()
@click.option(
    "--record-id",
    type=int,
    prompt=True,
    help="ID of the default record to delete, required"
)
@click.pass_context
def delete(ctx, record_id):
    """Delete a default record"""
    try:
        r = ctx.obj['client'].default_record(record_id)
        r.delete()
    except ClientException as e:
        click.echo("Error: " + str(e.code))
        click.echo("Response: " + str(e.message))
        ctx.exit(1)


# encoding: utf-8
# module System.Net.Security calls itself Security
# from System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
# by generator 1.145
""" NamespaceTracker represent a CLS namespace. """
# no imports
# no functions
# classes
class AuthenticatedStream(Stream, IDisposable):
    """ Provides methods for passing credentials across a stream and requesting or performing authentication for client-server applications. """

    def CreateWaitHandle(self, *args):  # cannot find CLR method
        """
        CreateWaitHandle(self: Stream) -> WaitHandle

        Allocates a System.Threading.WaitHandle object.

        Returns: A reference to the allocated WaitHandle.
        """
        pass

    def Dispose(self):
        """
        Dispose(self: AuthenticatedStream, disposing: bool)

        Releases the unmanaged resources used by the System.Net.Security.AuthenticatedStream and optionally releases the managed resources.

        disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
        """
        pass

    def MemberwiseClone(self, *args):  # cannot find CLR method
        """
        MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject

        Creates a shallow copy of the current System.MarshalByRefObject object.

        cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which will cause the object to be assigned a new identity when it is marshaled across a remoting boundary. A value of false is usually appropriate. true to copy the current System.MarshalByRefObject object's identity to its clone, which will cause remoting client calls to be routed to the remote server object.

        Returns: A shallow copy of the current System.MarshalByRefObject object.

        MemberwiseClone(self: object) -> object

        Creates a shallow copy of the current System.Object.

        Returns: A shallow copy of the current System.Object.
        """
        pass

    def ObjectInvariant(self, *args):  # cannot find CLR method
        """
        ObjectInvariant(self: Stream)

        Provides support for a System.Diagnostics.Contracts.Contract.
        """
        pass

    def __enter__(self, *args):  # cannot find CLR method
        """
        __enter__(self: IDisposable) -> object

        Provides the implementation of __enter__ for objects which implement IDisposable.
        """
        pass

    def __exit__(self, *args):  # cannot find CLR method
        """
        __exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object)

        Provides the implementation of __exit__ for objects which implement IDisposable.
        """
        pass

    def __init__(self, *args):  # cannot find CLR method
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    @staticmethod  # known case of __new__
    def __new__(self, *args):  # cannot find CLR constructor
        """ __new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool) """
        pass

    InnerStream = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets the stream used by this System.Net.Security.AuthenticatedStream for sending and receiving data."""

    IsAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets a System.Boolean value that indicates whether authentication was successful.

    Get: IsAuthenticated(self: AuthenticatedStream) -> bool
    """

    IsEncrypted = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets a System.Boolean value that indicates whether data sent using this System.Net.Security.AuthenticatedStream is encrypted.

    Get: IsEncrypted(self: AuthenticatedStream) -> bool
    """

    IsMutuallyAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets a System.Boolean value that indicates whether both server and client have been authenticated.

    Get: IsMutuallyAuthenticated(self: AuthenticatedStream) -> bool
    """

    IsServer = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets a System.Boolean value that indicates whether the local side of the connection was authenticated as the server.

    Get: IsServer(self: AuthenticatedStream) -> bool
    """

    IsSigned = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets a System.Boolean value that indicates whether the data sent using this stream is signed.

    Get: IsSigned(self: AuthenticatedStream) -> bool
    """

    LeaveInnerStreamOpen = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
    """Gets whether the stream used by this System.Net.Security.AuthenticatedStream for sending and receiving data has been left open.

    Get: LeaveInnerStreamOpen(self: AuthenticatedStream) -> bool
    """


class AuthenticationLevel(Enum, IComparable, IFormattable, IConvertible):
    """
    Specifies client requirements for authentication and impersonation when using the System.Net.WebRequest class and derived classes to request a resource.

    enum AuthenticationLevel, values: MutualAuthRequested (1), MutualAuthRequired (2), None (0)
    """

    def __eq__(self, *args):  # cannot find CLR method
        """ x.__eq__(y) <==> x==y """
        pass

    def __format__(self, *args):  # cannot find CLR method
        """ __format__(formattable: IFormattable, format: str) -> str """
        pass

    def __ge__(self, *args):  # cannot find CLR method
        pass

    def __gt__(self, *args):  # cannot find CLR method
        pass

    def __init__(self, *args):  # cannot find CLR method
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    def __le__(self, *args):  # cannot find CLR method
        pass

    def __lt__(self, *args):  # cannot find CLR method
        pass

    def __ne__(self, *args):  # cannot find CLR method
        pass

    def __reduce_ex__(self, *args):  # cannot find CLR method
        pass

    def __str__(self, *args):  # cannot find CLR method
        pass

    MutualAuthRequested = None
    MutualAuthRequired = None
    None = None
    value__ = None


class EncryptionPolicy(Enum, IComparable, IFormattable, IConvertible):
    """
    The EncryptionPolicy to use.

    enum EncryptionPolicy, values: AllowNoEncryption (1), NoEncryption (2), RequireEncryption (0)
    """

    def __eq__(self, *args):  # cannot find CLR method
        """ x.__eq__(y) <==> x==y """
        pass

    def __format__(self, *args):  # cannot find CLR method
        """ __format__(formattable: IFormattable, format: str) -> str """
        pass

    def __ge__(self, *args):  # cannot find CLR method
        pass

    def __gt__(self, *args):  # cannot find CLR method
        pass

    def __init__(self, *args):  # cannot find CLR method
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    def __le__(self, *args):  # cannot find CLR method
        pass

    def __lt__(self, *args):  # cannot find CLR method
        pass

    def __ne__(self, *args):  # cannot find CLR method
        pass

    def __reduce_ex__(self, *args):  # cannot find CLR method
        pass

    def __str__(self, *args):  # cannot find CLR method
        pass

    AllowNoEncryption = None
    NoEncryption = None
    RequireEncryption = None
    value__ = None


class LocalCertificateSelectionCallback(MulticastDelegate, ICloneable, ISerializable):
    """
    Selects the local Secure Sockets Layer (SSL) certificate used for authentication.

    LocalCertificateSelectionCallback(object: object, method: IntPtr)
    """

    def BeginInvoke(self, sender, targetHost, localCertificates, remoteCertificate, acceptableIssuers, callback, object):
        """ BeginInvoke(self: LocalCertificateSelectionCallback, sender: object, targetHost: str, localCertificates: X509CertificateCollection, remoteCertificate: X509Certificate, acceptableIssuers: Array[str], callback: AsyncCallback, object: object) -> IAsyncResult """
        pass

    def CombineImpl(self, *args):  # cannot find CLR method
        """
        CombineImpl(self: MulticastDelegate, follow: Delegate) -> Delegate

        Combines this System.Delegate with the specified System.Delegate to form a new delegate.

        follow: The delegate to combine with this delegate.

        Returns: A delegate that is the new root of the System.MulticastDelegate invocation list.
        """
        pass

    def DynamicInvokeImpl(self, *args):  # cannot find CLR method
        """
        DynamicInvokeImpl(self: Delegate, args: Array[object]) -> object

        Dynamically invokes (late-bound) the method represented by the current delegate.

        args: An array of objects that are the arguments to pass to the method represented by the current delegate. -or- null, if the method represented by the current delegate does not require arguments.

        Returns: The object returned by the method represented by the delegate.
        """
        pass

    def EndInvoke(self, result):
        """ EndInvoke(self: LocalCertificateSelectionCallback, result: IAsyncResult) -> X509Certificate """
        pass

    def GetMethodImpl(self, *args):  # cannot find CLR method
        """
        GetMethodImpl(self: MulticastDelegate) -> MethodInfo

        Returns a static method represented by the current System.MulticastDelegate.

        Returns: A static method represented by the current System.MulticastDelegate.
        """
        pass

    def Invoke(self, sender, targetHost, localCertificates, remoteCertificate, acceptableIssuers):
        """ Invoke(self: LocalCertificateSelectionCallback, sender: object, targetHost: str, localCertificates: X509CertificateCollection, remoteCertificate: X509Certificate, acceptableIssuers: Array[str]) -> X509Certificate """
        pass

    def RemoveImpl(self, *args):  # cannot find CLR method
        """
        RemoveImpl(self: MulticastDelegate, value: Delegate) -> Delegate

        Removes an element from the invocation list of this System.MulticastDelegate that is equal to the specified delegate.

        value: The delegate to search for in the invocation list.

        Returns: If value is found in the invocation list for this instance, then a new System.Delegate without value in its invocation list; otherwise, this instance with its original invocation list.
        """
        pass

    def __init__(self, *args):  # cannot find CLR method
        """ x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
        pass

    @staticmethod  # known case of __new__
    def __new__(self, object, method):
        """ __new__(cls: type, object: object, method: IntPtr) """
        pass

    def __reduce_ex__(self, *args):  # cannot find CLR method
        pass
class NegotiateStream(AuthenticatedStream, IDisposable):
"""
Provides a stream that uses the Negotiate security protocol to authenticate the client, and optionally the server, in client-server communication.
NegotiateStream(innerStream: Stream)
NegotiateStream(innerStream: Stream, leaveInnerStreamOpen: bool)
"""
def AuthenticateAsClient(self, credential=None, *__args):
"""
AuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel)
Called by clients to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified credentials and authentication
options.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
allowedImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
AuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel)
Called by clients to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified credential, authentication options,
and channel binding.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
binding: The System.Security.Authentication.ExtendedProtection.ChannelBinding that is used for extended
protection.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
allowedImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
AuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str)
Called by clients to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified client credential and the channel
binding.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
binding: The System.Security.Authentication.ExtendedProtection.ChannelBinding that is used for extended
protection.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
AuthenticateAsClient(self: NegotiateStream)
Called by clients to authenticate the client, and optionally the server, in a client-server
connection.
AuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, targetName: str)
Called by clients to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified client credential.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
"""
pass
def AuthenticateAsClientAsync(self, credential=None, *__args):
"""
AuthenticateAsClientAsync(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str) -> Task
AuthenticateAsClientAsync(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel) -> Task
AuthenticateAsClientAsync(self: NegotiateStream, credential: NetworkCredential, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel) -> Task
AuthenticateAsClientAsync(self: NegotiateStream) -> Task
AuthenticateAsClientAsync(self: NegotiateStream, credential: NetworkCredential, targetName: str) -> Task
"""
pass
def AuthenticateAsServer(self, *__args):
"""
AuthenticateAsServer(self: NegotiateStream, credential: NetworkCredential, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel)
Called by servers to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified server credentials and authentication
options.
credential: The System.Net.NetworkCredential that is used to establish the identity of the server.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
requiredImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
AuthenticateAsServer(self: NegotiateStream, credential: NetworkCredential, policy: ExtendedProtectionPolicy, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel)
Called by servers to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified server credentials, authentication
options, and extended protection policy.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
policy: The System.Security.Authentication.ExtendedProtection.ExtendedProtectionPolicy that is used for
extended protection.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
requiredImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
AuthenticateAsServer(self: NegotiateStream)
Called by servers to authenticate the client, and optionally the server, in a client-server
connection.
AuthenticateAsServer(self: NegotiateStream, policy: ExtendedProtectionPolicy)
Called by servers to authenticate the client, and optionally the server, in a client-server
connection. The authentication process uses the specified extended protection policy.
policy: The System.Security.Authentication.ExtendedProtection.ExtendedProtectionPolicy that is used for
extended protection.
"""
pass
def AuthenticateAsServerAsync(self, *__args):
"""
AuthenticateAsServerAsync(self: NegotiateStream, credential: NetworkCredential, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel) -> Task
AuthenticateAsServerAsync(self: NegotiateStream, credential: NetworkCredential, policy: ExtendedProtectionPolicy, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel) -> Task
AuthenticateAsServerAsync(self: NegotiateStream) -> Task
AuthenticateAsServerAsync(self: NegotiateStream, policy: ExtendedProtectionPolicy) -> Task
"""
pass
def BeginAuthenticateAsClient(self, *__args):
"""
BeginAuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified
credentials and authentication options. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
allowedImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str, requiredProtectionLevel: ProtectionLevel, allowedImpersonationLevel: TokenImpersonationLevel, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified
credentials, authentication options, and channel binding. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
binding: The System.Security.Authentication.ExtendedProtection.ChannelBinding that is used for extended
protection.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
allowedImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, binding: ChannelBinding, targetName: str, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified
credentials and channel binding. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
binding: The System.Security.Authentication.ExtendedProtection.ChannelBinding that is used for extended
protection.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsClient(self: NegotiateStream, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. This method does not block.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the operation. This object is passed to the
asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsClient(self: NegotiateStream, credential: NetworkCredential, targetName: str, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified
credentials. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
targetName: The Service Principal Name (SPN) that uniquely identifies the server to authenticate.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
def BeginAuthenticateAsServer(self, *__args):
"""
BeginAuthenticateAsServer(self: NegotiateStream, credential: NetworkCredential, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified server
credentials and authentication options. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
requiredImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the operation. This object is passed to the
asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsServer(self: NegotiateStream, credential: NetworkCredential, policy: ExtendedProtectionPolicy, requiredProtectionLevel: ProtectionLevel, requiredImpersonationLevel: TokenImpersonationLevel, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified server
credentials, authentication options, and extended protection policy. This method does not block.
credential: The System.Net.NetworkCredential that is used to establish the identity of the client.
policy: The System.Security.Authentication.ExtendedProtection.ExtendedProtectionPolicy that is used for
extended protection.
requiredProtectionLevel: One of the System.Net.Security.ProtectionLevel values, indicating the security services for the
stream.
requiredImpersonationLevel: One of the System.Security.Principal.TokenImpersonationLevel values, indicating how the server
can use the client's credentials to access resources.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsServer(self: NegotiateStream, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. This method does not block.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the operation. This object is passed to the
asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
BeginAuthenticateAsServer(self: NegotiateStream, policy: ExtendedProtectionPolicy, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the client, and optionally
the server, in a client-server connection. The authentication process uses the specified
extended protection policy. This method does not block.
policy: The System.Security.Authentication.ExtendedProtection.ExtendedProtectionPolicy that is used for
extended protection.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object containing information about the authentication operation. This object is
passed to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
def BeginRead(self, buffer, offset, count, asyncCallback, asyncState):
"""
BeginRead(self: NegotiateStream, buffer: Array[Byte], offset: int, count: int, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Begins an asynchronous read operation that reads data from the stream and stores it in the
specified array.
buffer: A System.Byte array that receives the bytes read from the stream.
offset: The zero-based location in buffer at which to begin storing the data read from this stream.
count: The maximum number of bytes to read from the stream.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the read operation is
complete.
asyncState: A user-defined object containing information about the read operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
def BeginWrite(self, buffer, offset, count, asyncCallback, asyncState):
"""
BeginWrite(self: NegotiateStream, buffer: Array[Byte], offset: int, count: int, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Begins an asynchronous write operation that writes System.Byte values from the specified buffer to
the stream.
buffer: A System.Byte array that supplies the bytes to be written to the stream.
offset: The zero-based location in buffer at which to begin reading bytes to be written to the stream.
count: An System.Int32 value that specifies the number of bytes to read from buffer.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the write operation
is complete.
asyncState: A user-defined object containing information about the write operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
def CreateWaitHandle(self, *args): #cannot find CLR method
"""
CreateWaitHandle(self: Stream) -> WaitHandle
Allocates a System.Threading.WaitHandle object.
Returns: A reference to the allocated WaitHandle.
"""
pass
def Dispose(self):
"""
Dispose(self: NegotiateStream, disposing: bool)
Releases the unmanaged resources used by the System.Net.Security.NegotiateStream and optionally
releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
"""
pass
def EndAuthenticateAsClient(self, asyncResult):
"""
EndAuthenticateAsClient(self: NegotiateStream, asyncResult: IAsyncResult)
Ends a pending asynchronous client authentication operation that was started with a call to
Overload:System.Net.Security.NegotiateStream.BeginAuthenticateAsClient.
asyncResult: An System.IAsyncResult instance returned by a call to
Overload:System.Net.Security.NegotiateStream.BeginAuthenticateAsClient.
"""
pass
def EndAuthenticateAsServer(self, asyncResult):
"""
EndAuthenticateAsServer(self: NegotiateStream, asyncResult: IAsyncResult)
Ends a pending asynchronous server authentication operation that was started with a call to
Overload:System.Net.Security.NegotiateStream.BeginAuthenticateAsServer.
asyncResult: An System.IAsyncResult instance returned by a call to
Overload:System.Net.Security.NegotiateStream.BeginAuthenticateAsServer.
"""
pass
def EndRead(self, asyncResult):
"""
EndRead(self: NegotiateStream, asyncResult: IAsyncResult) -> int
Ends an asynchronous read operation that was started with a call to
System.Net.Security.NegotiateStream.BeginRead(System.Byte[],System.Int32,System.Int32,System.Asyn
cCallback,System.Object).
asyncResult: An System.IAsyncResult instance returned by a call to
System.Net.Security.NegotiateStream.BeginRead(System.Byte[],System.Int32,System.Int32,System.Asyn
cCallback,System.Object)
Returns: A System.Int32 value that specifies the number of bytes read from the underlying stream.
"""
pass
def EndWrite(self, asyncResult):
"""
EndWrite(self: NegotiateStream, asyncResult: IAsyncResult)
Ends an asynchronous write operation that was started with a call to
System.Net.Security.NegotiateStream.BeginWrite(System.Byte[],System.Int32,System.Int32,System.Asy
ncCallback,System.Object).
asyncResult: An System.IAsyncResult instance returned by a call to
System.Net.Security.NegotiateStream.BeginWrite(System.Byte[],System.Int32,System.Int32,System.Asy
ncCallback,System.Object)
"""
pass
def Flush(self):
"""
Flush(self: NegotiateStream)
Causes any buffered data to be written to the underlying device.
"""
pass
def MemberwiseClone(self, *args): #cannot find CLR method
"""
MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which will cause the
object to be assigned a new identity when it is marshaled across a remoting boundary. A value of
false is usually appropriate. true to copy the current System.MarshalByRefObject object's
identity to its clone, which will cause remoting client calls to be routed to the remote server
object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
def ObjectInvariant(self, *args): #cannot find CLR method
"""
ObjectInvariant(self: Stream)
Provides support for a System.Diagnostics.Contracts.Contract.
"""
pass
def Read(self, buffer, offset, count):
"""
Read(self: NegotiateStream, buffer: Array[Byte], offset: int, count: int) -> int
Reads data from this stream and stores it in the specified array.
buffer: A System.Byte array that receives the bytes read from the stream.
offset: A System.Int32 containing the zero-based location in buffer at which to begin storing the data
read from this stream.
count: A System.Int32 containing the maximum number of bytes to read from the stream.
Returns: A System.Int32 value that specifies the number of bytes read from the underlying stream. When
there is no more data to be read, returns 0.
"""
pass
def Seek(self, offset, origin):
"""
Seek(self: NegotiateStream, offset: Int64, origin: SeekOrigin) -> Int64
Throws System.NotSupportedException.
offset: This value is ignored.
origin: This value is ignored.
Returns: Always throws a System.NotSupportedException.
"""
pass
def SetLength(self, value):
"""
SetLength(self: NegotiateStream, value: Int64)
Sets the length of the underlying stream.
value: An System.Int64 value that specifies the length of the stream.
"""
pass
def Write(self, buffer, offset, count):
"""
Write(self: NegotiateStream, buffer: Array[Byte], offset: int, count: int)
Writes the specified number of System.Byte values to the underlying stream using the specified
buffer and offset.
buffer: A System.Byte array that supplies the bytes written to the stream.
offset: An System.Int32 containing the zero-based location in buffer at which to begin reading bytes to
be written to the stream.
count: A System.Int32 containing the number of bytes to read from buffer.
"""
pass
def __enter__(self, *args): #cannot find CLR method
"""
__enter__(self: IDisposable) -> object
Provides the implementation of __enter__ for objects which implement IDisposable.
"""
pass
def __exit__(self, *args): #cannot find CLR method
"""
__exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object)
Provides the implementation of __exit__ for objects which implement IDisposable.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, innerStream, leaveInnerStreamOpen=None):
"""
__new__(cls: type, innerStream: Stream)
__new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool)
"""
pass
CanRead = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is readable.
Get: CanRead(self: NegotiateStream) -> bool
"""
CanSeek = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is seekable.
Get: CanSeek(self: NegotiateStream) -> bool
"""
CanTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream supports time-outs.
Get: CanTimeout(self: NegotiateStream) -> bool
"""
CanWrite = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is writable.
Get: CanWrite(self: NegotiateStream) -> bool
"""
ImpersonationLevel = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that indicates how the server can use the client's credentials.
Get: ImpersonationLevel(self: NegotiateStream) -> TokenImpersonationLevel
"""
InnerStream = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the stream used by this System.Net.Security.AuthenticatedStream for sending and receiving data.
"""
IsAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether authentication was successful.
Get: IsAuthenticated(self: NegotiateStream) -> bool
"""
IsEncrypted = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether this System.Net.Security.NegotiateStream uses data encryption.
Get: IsEncrypted(self: NegotiateStream) -> bool
"""
IsMutuallyAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether both the server and the client have been authenticated.
Get: IsMutuallyAuthenticated(self: NegotiateStream) -> bool
"""
IsServer = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the local side of the connection used by this System.Net.Security.NegotiateStream was authenticated as the server.
Get: IsServer(self: NegotiateStream) -> bool
"""
IsSigned = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the data sent using this stream is signed.
Get: IsSigned(self: NegotiateStream) -> bool
"""
Length = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length of the underlying stream.
Get: Length(self: NegotiateStream) -> Int64
"""
Position = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the current position in the underlying stream.
Get: Position(self: NegotiateStream) -> Int64
Set: Position(self: NegotiateStream) = value
"""
ReadTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the amount of time a read operation blocks waiting for data.
Get: ReadTimeout(self: NegotiateStream) -> int
Set: ReadTimeout(self: NegotiateStream) = value
"""
RemoteIdentity = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets information about the identity of the remote party sharing this authenticated stream.
Get: RemoteIdentity(self: NegotiateStream) -> IIdentity
"""
WriteTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the amount of time a write operation blocks while waiting to send data.
Get: WriteTimeout(self: NegotiateStream) -> int
Set: WriteTimeout(self: NegotiateStream) = value
"""
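# The BeginRead/EndRead and BeginAuthenticateAsClient/EndAuthenticateAsClient pairs above
# follow .NET's IAsyncResult (Begin/End) pattern. The sketch below is a minimal pure-Python
# illustration of that flow, not the .NET implementation; SimpleAsyncResult, begin_read and
# end_read are hypothetical names used only for illustration.

```python
# Sketch of the Begin/End asynchronous pattern: Begin* starts work on another
# thread and returns a handle immediately, the callback fires on completion,
# and End* blocks until the result is available.
import threading

class SimpleAsyncResult:
    """Stands in for System.IAsyncResult: tracks completion, state, result."""
    def __init__(self, async_state):
        self.async_state = async_state   # the user-defined asyncState object
        self.is_completed = False
        self._done = threading.Event()
        self._result = None

def begin_read(data, callback, async_state):
    """Begin*-style method: kick off the work and return without blocking."""
    ar = SimpleAsyncResult(async_state)
    def work():
        ar._result = len(data)           # pretend we read len(data) bytes
        ar.is_completed = True
        ar._done.set()
        if callback is not None:
            callback(ar)                 # invoked when the operation completes
    threading.Thread(target=work).start()
    return ar

def end_read(ar):
    """End*-style method: block until completion, then return the result."""
    ar._done.wait()
    return ar._result
```

# A caller typically passes its context as async_state and reads it back from
# the IAsyncResult inside the callback, exactly as the docstrings above describe.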
class ProtectionLevel(Enum, IComparable, IFormattable, IConvertible):
"""
Indicates the security services requested for an authenticated stream.
enum ProtectionLevel, values: EncryptAndSign (2), None (0), Sign (1)
"""
def __eq__(self, *args): #cannot find CLR method
""" x.__eq__(y) <==> x==y """
pass
def __format__(self, *args): #cannot find CLR method
""" __format__(formattable: IFormattable, format: str) -> str """
pass
def __ge__(self, *args): #cannot find CLR method
pass
def __gt__(self, *args): #cannot find CLR method
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self, *args): #cannot find CLR method
pass
def __lt__(self, *args): #cannot find CLR method
pass
def __ne__(self, *args): #cannot find CLR method
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
def __str__(self, *args): #cannot find CLR method
pass
EncryptAndSign = None
None = None
Sign = None
value__ = None
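# The requested ProtectionLevel determines which security services the stream reports after
# authentication (IsSigned/IsEncrypted). The following is a hypothetical pure-Python mirror of
# that mapping; the constants echo the documented enum values (None=0, Sign=1,
# EncryptAndSign=2), but expected_services is illustrative and not part of the .NET API.

```python
# Illustrative mapping from a requested ProtectionLevel to the
# (is_signed, is_encrypted) pair the authenticated stream would report.
PROTECTION_NONE, SIGN, ENCRYPT_AND_SIGN = 0, 1, 2

def expected_services(protection_level):
    """Return (is_signed, is_encrypted) implied by the requested level."""
    return {
        PROTECTION_NONE:  (False, False),  # authentication only
        SIGN:             (True, False),   # integrity, no confidentiality
        ENCRYPT_AND_SIGN: (True, True),    # integrity and confidentiality
    }[protection_level]
```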
class RemoteCertificateValidationCallback(MulticastDelegate, ICloneable, ISerializable):
"""
Verifies the remote Secure Sockets Layer (SSL) certificate used for authentication.
RemoteCertificateValidationCallback(object: object, method: IntPtr)
"""
def BeginInvoke(self, sender, certificate, chain, sslPolicyErrors, callback, object):
""" BeginInvoke(self: RemoteCertificateValidationCallback, sender: object, certificate: X509Certificate, chain: X509Chain, sslPolicyErrors: SslPolicyErrors, callback: AsyncCallback, object: object) -> IAsyncResult """
pass
def CombineImpl(self, *args): #cannot find CLR method
"""
CombineImpl(self: MulticastDelegate, follow: Delegate) -> Delegate
Combines this System.Delegate with the specified System.Delegate to form a new delegate.
follow: The delegate to combine with this delegate.
Returns: A delegate that is the new root of the System.MulticastDelegate invocation list.
"""
pass
def DynamicInvokeImpl(self, *args): #cannot find CLR method
"""
DynamicInvokeImpl(self: Delegate, args: Array[object]) -> object
Dynamically invokes (late-bound) the method represented by the current delegate.
args: An array of objects that are the arguments to pass to the method represented by the current
delegate.-or- null, if the method represented by the current delegate does not require
arguments.
Returns: The object returned by the method represented by the delegate.
"""
pass
def EndInvoke(self, result):
""" EndInvoke(self: RemoteCertificateValidationCallback, result: IAsyncResult) -> bool """
pass
def GetMethodImpl(self, *args): #cannot find CLR method
"""
GetMethodImpl(self: MulticastDelegate) -> MethodInfo
Returns a static method represented by the current System.MulticastDelegate.
Returns: A static method represented by the current System.MulticastDelegate.
"""
pass
def Invoke(self, sender, certificate, chain, sslPolicyErrors):
""" Invoke(self: RemoteCertificateValidationCallback, sender: object, certificate: X509Certificate, chain: X509Chain, sslPolicyErrors: SslPolicyErrors) -> bool """
pass
def RemoveImpl(self, *args): #cannot find CLR method
"""
RemoveImpl(self: MulticastDelegate, value: Delegate) -> Delegate
Removes an element from the invocation list of this System.MulticastDelegate that is equal to
the specified delegate.
value: The delegate to search for in the invocation list.
Returns: If value is found in the invocation list for this instance, then a new System.Delegate without
value in its invocation list; otherwise, this instance with its original invocation list.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, object, method):
""" __new__(cls: type, object: object, method: IntPtr) """
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
class SslPolicyErrors(Enum, IComparable, IFormattable, IConvertible):
"""
Enumerates Secure Socket Layer (SSL) policy errors.
enum (flags) SslPolicyErrors, values: None (0), RemoteCertificateChainErrors (4), RemoteCertificateNameMismatch (2), RemoteCertificateNotAvailable (1)
"""
def __eq__(self, *args): #cannot find CLR method
""" x.__eq__(y) <==> x==y """
pass
def __format__(self, *args): #cannot find CLR method
""" __format__(formattable: IFormattable, format: str) -> str """
pass
def __ge__(self, *args): #cannot find CLR method
pass
def __gt__(self, *args): #cannot find CLR method
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self, *args): #cannot find CLR method
pass
def __lt__(self, *args): #cannot find CLR method
pass
def __ne__(self, *args): #cannot find CLR method
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
def __str__(self, *args): #cannot find CLR method
pass
None = None
RemoteCertificateChainErrors = None
RemoteCertificateNameMismatch = None
RemoteCertificateNotAvailable = None
value__ = None
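# SslPolicyErrors is a flags enum, so a RemoteCertificateValidationCallback receives a bitmask
# that can combine several errors at once. A hedged pure-Python sketch of decoding it; the
# mirrored flag values below match the docstring above (None=0, NotAvailable=1, NameMismatch=2,
# ChainErrors=4), but describe_errors is a hypothetical helper, not part of the .NET API.

```python
# Decode a combined SslPolicyErrors bitmask into its individual error names.
SSL_POLICY_NONE = 0
REMOTE_CERT_NOT_AVAILABLE = 1
REMOTE_CERT_NAME_MISMATCH = 2
REMOTE_CERT_CHAIN_ERRORS = 4

def describe_errors(ssl_policy_errors):
    """Return the list of individual errors present in the bitmask."""
    if ssl_policy_errors == SSL_POLICY_NONE:
        return []                        # no policy errors: certificate is OK
    found = []
    for flag, name in [
        (REMOTE_CERT_NOT_AVAILABLE, "RemoteCertificateNotAvailable"),
        (REMOTE_CERT_NAME_MISMATCH, "RemoteCertificateNameMismatch"),
        (REMOTE_CERT_CHAIN_ERRORS,  "RemoteCertificateChainErrors"),
    ]:
        if ssl_policy_errors & flag:     # flags combine bitwise
            found.append(name)
    return found
```

# A validation callback would typically return True only when the mask is None
# (or contains only errors the application explicitly chooses to tolerate).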
class SslStream(AuthenticatedStream, IDisposable):
"""
Provides a stream used for client-server communication that uses the Secure Socket Layer (SSL) security protocol to authenticate the server and optionally the client.
SslStream(innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback, userCertificateSelectionCallback: LocalCertificateSelectionCallback)
SslStream(innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback, userCertificateSelectionCallback: LocalCertificateSelectionCallback, encryptionPolicy: EncryptionPolicy)
SslStream(innerStream: Stream)
SslStream(innerStream: Stream, leaveInnerStreamOpen: bool)
SslStream(innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback)
"""
def AuthenticateAsClient(self, targetHost, clientCertificates=None, *__args):
"""
AuthenticateAsClient(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, checkCertificateRevocation: bool)
AuthenticateAsClient(self: SslStream, targetHost: str)
Called by clients to authenticate the server and optionally the client in a client-server
connection.
targetHost: The name of the server that shares this System.Net.Security.SslStream.
AuthenticateAsClient(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool)
Called by clients to authenticate the server and optionally the client in a client-server
connection. The authentication process uses the specified certificate collection and SSL
protocol.
targetHost: The name of the server that will share this System.Net.Security.SslStream.
clientCertificates: The System.Security.Cryptography.X509Certificates.X509CertificateCollection that contains client
certificates.
enabledSslProtocols: The System.Security.Authentication.SslProtocols value that represents the protocol used for
authentication.
checkCertificateRevocation: A System.Boolean value that specifies whether the certificate revocation list is checked during
authentication.
"""
pass
def AuthenticateAsClientAsync(self, targetHost, clientCertificates=None, *__args):
"""
AuthenticateAsClientAsync(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool) -> Task
AuthenticateAsClientAsync(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, checkCertificateRevocation: bool) -> Task
AuthenticateAsClientAsync(self: SslStream, targetHost: str) -> Task
"""
pass
def AuthenticateAsServer(self, serverCertificate, clientCertificateRequired=None, *__args):
"""
AuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool)
Called by servers to authenticate the server and optionally the client in a client-server
connection using the specified certificates, requirements and security protocol.
serverCertificate: The X509Certificate used to authenticate the server.
clientCertificateRequired: A System.Boolean value that specifies whether the client must supply a certificate for
authentication.
enabledSslProtocols: The System.Security.Authentication.SslProtocols value that represents the protocol used for
authentication.
checkCertificateRevocation: A System.Boolean value that specifies whether the certificate revocation list is checked during
authentication.
AuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, checkCertificateRevocation: bool)
AuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate)
Called by servers to authenticate the server and optionally the client in a client-server
connection using the specified certificate.
serverCertificate: The certificate used to authenticate the server.
"""
pass
def AuthenticateAsServerAsync(self, serverCertificate, clientCertificateRequired=None, *__args):
"""
AuthenticateAsServerAsync(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool) -> Task
AuthenticateAsServerAsync(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, checkCertificateRevocation: bool) -> Task
AuthenticateAsServerAsync(self: SslStream, serverCertificate: X509Certificate) -> Task
"""
pass
def BeginAuthenticateAsClient(self, targetHost, *__args):
"""
BeginAuthenticateAsClient(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the server and optionally
the client using the specified certificates and security protocol.
targetHost: The name of the server that shares this System.Net.Security.SslStream.
clientCertificates: The System.Security.Cryptography.X509Certificates.X509CertificateCollection containing client
certificates.
enabledSslProtocols: The System.Security.Authentication.SslProtocols value that represents the protocol used for
authentication.
checkCertificateRevocation: A System.Boolean value that specifies whether the certificate revocation list is checked during
authentication.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object that contains information about the operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object that indicates the status of the asynchronous operation.
BeginAuthenticateAsClient(self: SslStream, targetHost: str, clientCertificates: X509CertificateCollection, checkCertificateRevocation: bool, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
BeginAuthenticateAsClient(self: SslStream, targetHost: str, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by clients to begin an asynchronous operation to authenticate the server and optionally
the client.
targetHost: The name of the server that shares this System.Net.Security.SslStream.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object that contains information about the operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object that indicates the status of the asynchronous operation.
"""
pass
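On the client side, `targetHost` plays the role CPython gives to `server_hostname` (checked against the server certificate), and `checkCertificateRevocation` again corresponds to CRL verify flags. A minimal sketch using the standard `ssl` module as an analogue (not the CLR API):

```python
import ssl

# create_default_context() verifies the server certificate and hostname,
# mirroring the default behaviour of a client-side SslStream handshake.
client_ctx = ssl.create_default_context()

# checkCertificateRevocation=True ~ enable CRL checking on the leaf cert
client_ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF

# clientCertificates would be supplied via client_ctx.load_cert_chain(...)
```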
def BeginAuthenticateAsServer(self, serverCertificate, *__args):
"""
BeginAuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, enabledSslProtocols: SslProtocols, checkCertificateRevocation: bool, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the server and optionally
the client using the specified certificates, requirements and security protocol.
serverCertificate: The X509Certificate used to authenticate the server.
clientCertificateRequired: A System.Boolean value that specifies whether the client must supply a certificate for
authentication.
enabledSslProtocols: The System.Security.Authentication.SslProtocols value that represents the protocol used for
authentication.
checkCertificateRevocation: A System.Boolean value that specifies whether the certificate revocation list is checked during
authentication.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object that contains information about the operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object that indicates the status of the asynchronous operation.
BeginAuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate, clientCertificateRequired: bool, checkCertificateRevocation: bool, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
BeginAuthenticateAsServer(self: SslStream, serverCertificate: X509Certificate, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Called by servers to begin an asynchronous operation to authenticate the server and optionally
the client in a client-server connection.
serverCertificate: The X509Certificate used to authenticate the server.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the authentication is
complete.
asyncState: A user-defined object that contains information about the operation. This object is passed to
the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
def BeginRead(self, buffer, offset, count, asyncCallback, asyncState):
"""
BeginRead(self: SslStream, buffer: Array[Byte], offset: int, count: int, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Begins an asynchronous read operation that reads data from the stream and stores it in the
specified array.
buffer: A System.Byte array that receives the bytes read from the stream.
offset: The zero-based location in buffer at which to begin storing the data read from this stream.
count: The maximum number of bytes to read from the stream.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the read operation is
complete.
asyncState: A user-defined object that contains information about the read operation. This object is passed
to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object that indicates the status of the asynchronous operation.
"""
pass
def BeginWrite(self, buffer, offset, count, asyncCallback, asyncState):
"""
BeginWrite(self: SslStream, buffer: Array[Byte], offset: int, count: int, asyncCallback: AsyncCallback, asyncState: object) -> IAsyncResult
Begins an asynchronous write operation that writes System.Bytes from the specified buffer to the
stream.
buffer: A System.Byte array that supplies the bytes to be written to the stream.
offset: The zero-based location in buffer at which to begin reading bytes to be written to the stream.
count: An System.Int32 value that specifies the number of bytes to read from buffer.
asyncCallback: An System.AsyncCallback delegate that references the method to invoke when the write operation
is complete.
asyncState: A user-defined object that contains information about the write operation. This object is passed
to the asyncCallback delegate when the operation completes.
Returns: An System.IAsyncResult object indicating the status of the asynchronous operation.
"""
pass
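The Begin*/End* pairs above follow .NET's Asynchronous Programming Model: Begin* starts the work and returns an IAsyncResult handle, and the matching End* blocks on that handle and yields the result. A plain-Python sketch of the pattern (names are illustrative, not CLR APIs):

```python
import io
import threading

class SimpleAsyncResult:
    """Minimal stand-in for IAsyncResult: runs func on a worker thread."""
    def __init__(self, func, callback, state):
        self.async_state = state        # ~ asyncState
        self._done = threading.Event()
        self._value = None
        def run():
            self._value = func()
            self._done.set()
            if callback is not None:    # ~ asyncCallback
                callback(self)
        threading.Thread(target=run).start()

    def wait(self):
        self._done.wait()
        return self._value

def begin_read(stream, count, callback=None, state=None):
    # ~ BeginRead: start the operation, return the handle immediately
    return SimpleAsyncResult(lambda: stream.read(count), callback, state)

def end_read(async_result):
    # ~ EndRead: block until complete, return the bytes read
    return async_result.wait()

data = end_read(begin_read(io.BytesIO(b"hello"), 5))
```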
def CreateWaitHandle(self, *args): #cannot find CLR method
"""
CreateWaitHandle(self: Stream) -> WaitHandle
Allocates a System.Threading.WaitHandle object.
Returns: A reference to the allocated WaitHandle.
"""
pass
def Dispose(self):
"""
Dispose(self: SslStream, disposing: bool)
Releases the unmanaged resources used by the System.Net.Security.SslStream and optionally
releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to release only unmanaged resources.
"""
pass
def EndAuthenticateAsClient(self, asyncResult):
"""
EndAuthenticateAsClient(self: SslStream, asyncResult: IAsyncResult)
Ends a pending asynchronous server authentication operation started with a previous call to
Overload:System.Net.Security.SslStream.BeginAuthenticateAsClient.
asyncResult: An System.IAsyncResult instance returned by a call to
Overload:System.Net.Security.SslStream.BeginAuthenticateAsClient.
"""
pass
def EndAuthenticateAsServer(self, asyncResult):
"""
EndAuthenticateAsServer(self: SslStream, asyncResult: IAsyncResult)
Ends a pending asynchronous client authentication operation started with a previous call to
Overload:System.Net.Security.SslStream.BeginAuthenticateAsServer.
asyncResult: An System.IAsyncResult instance returned by a call to
Overload:System.Net.Security.SslStream.BeginAuthenticateAsServer.
"""
pass
def EndRead(self, asyncResult):
"""
EndRead(self: SslStream, asyncResult: IAsyncResult) -> int
Ends an asynchronous read operation started with a previous call to
System.Net.Security.SslStream.BeginRead(System.Byte[],System.Int32,System.Int32,System.AsyncCallb
ack,System.Object).
asyncResult: An System.IAsyncResult instance returned by a call to
System.Net.Security.SslStream.BeginRead(System.Byte[],System.Int32,System.Int32,System.AsyncCallb
ack,System.Object)
Returns: A System.Int32 value that specifies the number of bytes read from the underlying stream.
"""
pass
def EndWrite(self, asyncResult):
"""
EndWrite(self: SslStream, asyncResult: IAsyncResult)
Ends an asynchronous write operation started with a previous call to
System.Net.Security.SslStream.BeginWrite(System.Byte[],System.Int32,System.Int32,System.AsyncCall
back,System.Object).
asyncResult: An System.IAsyncResult instance returned by a call to
System.Net.Security.SslStream.BeginWrite(System.Byte[],System.Int32,System.Int32,System.AsyncCall
back,System.Object)
"""
pass
def Flush(self):
"""
Flush(self: SslStream)
Causes any buffered data to be written to the underlying device.
"""
pass
def MemberwiseClone(self, *args): #cannot find CLR method
"""
MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which will cause the
object to be assigned a new identity when it is marshaled across a remoting boundary. A value of
false is usually appropriate. true to copy the current System.MarshalByRefObject object's
identity to its clone, which will cause remoting client calls to be routed to the remote server
object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
def ObjectInvariant(self, *args): #cannot find CLR method
"""
ObjectInvariant(self: Stream)
Provides support for a System.Diagnostics.Contracts.Contract.
"""
pass
def Read(self, buffer, offset, count):
"""
Read(self: SslStream, buffer: Array[Byte], offset: int, count: int) -> int
Reads data from this stream and stores it in the specified array.
buffer: A System.Byte array that receives the bytes read from this stream.
offset: A System.Int32 that contains the zero-based location in buffer at which to begin storing the
data read from this stream.
count: A System.Int32 that contains the maximum number of bytes to read from this stream.
Returns: A System.Int32 value that specifies the number of bytes read. When there is no more data to be
read, returns 0.
"""
pass
def Seek(self, offset, origin):
"""
Seek(self: SslStream, offset: Int64, origin: SeekOrigin) -> Int64
Throws a System.NotSupportedException.
offset: This value is ignored.
origin: This value is ignored.
Returns: Always throws a System.NotSupportedException.
"""
pass
def SetLength(self, value):
"""
SetLength(self: SslStream, value: Int64)
Sets the length of the underlying stream.
value: An System.Int64 value that specifies the length of the stream.
"""
pass
def ShutdownAsync(self):
""" ShutdownAsync(self: SslStream) -> Task """
pass
def Write(self, buffer, offset=None, count=None):
"""
Write(self: SslStream, buffer: Array[Byte], offset: int, count: int)
Write the specified number of System.Bytes to the underlying stream using the specified buffer
and offset.
buffer: A System.Byte array that supplies the bytes written to the stream.
offset: A System.Int32 that contains the zero-based location in buffer at which to begin reading bytes
to be written to the stream.
count: A System.Int32 that contains the number of bytes to read from buffer.
Write(self: SslStream, buffer: Array[Byte])
Writes the specified data to this stream.
buffer: A System.Byte array that supplies the bytes written to the stream.
"""
pass
def __enter__(self, *args): #cannot find CLR method
"""
__enter__(self: IDisposable) -> object
Provides the implementation of __enter__ for objects which implement IDisposable.
"""
pass
def __exit__(self, *args): #cannot find CLR method
"""
__exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object)
Provides the implementation of __exit__ for objects which implement IDisposable.
"""
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, innerStream, leaveInnerStreamOpen=None, userCertificateValidationCallback=None, userCertificateSelectionCallback=None, encryptionPolicy=None):
"""
__new__(cls: type, innerStream: Stream)
__new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool)
__new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback)
__new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback, userCertificateSelectionCallback: LocalCertificateSelectionCallback)
__new__(cls: type, innerStream: Stream, leaveInnerStreamOpen: bool, userCertificateValidationCallback: RemoteCertificateValidationCallback, userCertificateSelectionCallback: LocalCertificateSelectionCallback, encryptionPolicy: EncryptionPolicy)
"""
pass
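The `leaveInnerStreamOpen` constructor flag decides whether closing the SslStream also closes the wrapped stream. The idea can be sketched in plain Python (illustrative names, not the CLR type):

```python
import io

class WrapperStream:
    """Sketch of the leaveInnerStreamOpen idea: optionally keep the
    wrapped stream alive when the wrapper is closed."""
    def __init__(self, inner_stream, leave_inner_stream_open=False):
        self.inner_stream = inner_stream
        self._leave_open = leave_inner_stream_open

    def close(self):
        if not self._leave_open:
            self.inner_stream.close()

inner = io.BytesIO(b"payload")
WrapperStream(inner, leave_inner_stream_open=True).close()
still_open = not inner.closed   # the inner stream survived

inner2 = io.BytesIO(b"payload")
WrapperStream(inner2).close()   # default: inner stream closed too
```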
CanRead = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is readable.
Get: CanRead(self: SslStream) -> bool
"""
CanSeek = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is seekable.
Get: CanSeek(self: SslStream) -> bool
"""
CanTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream supports time-outs.
Get: CanTimeout(self: SslStream) -> bool
"""
CanWrite = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the underlying stream is writable.
Get: CanWrite(self: SslStream) -> bool
"""
CheckCertRevocationStatus = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the certificate revocation list is checked during the certificate validation process.
Get: CheckCertRevocationStatus(self: SslStream) -> bool
"""
CipherAlgorithm = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that identifies the bulk encryption algorithm used by this System.Net.Security.SslStream.
Get: CipherAlgorithm(self: SslStream) -> CipherAlgorithmType
"""
CipherStrength = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that identifies the strength of the cipher algorithm used by this System.Net.Security.SslStream.
Get: CipherStrength(self: SslStream) -> int
"""
HashAlgorithm = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the algorithm used for generating message authentication codes (MACs).
Get: HashAlgorithm(self: SslStream) -> HashAlgorithmType
"""
HashStrength = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that identifies the strength of the hash algorithm used by this instance.
Get: HashStrength(self: SslStream) -> int
"""
InnerStream = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the stream used by this System.Net.Security.AuthenticatedStream for sending and receiving data.
"""
IsAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether authentication was successful.
Get: IsAuthenticated(self: SslStream) -> bool
"""
IsEncrypted = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether this System.Net.Security.SslStream uses data encryption.
Get: IsEncrypted(self: SslStream) -> bool
"""
IsMutuallyAuthenticated = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether both server and client have been authenticated.
Get: IsMutuallyAuthenticated(self: SslStream) -> bool
"""
IsServer = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the local side of the connection used by this System.Net.Security.SslStream was authenticated as the server.
Get: IsServer(self: SslStream) -> bool
"""
IsSigned = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a System.Boolean value that indicates whether the data sent using this stream is signed.
Get: IsSigned(self: SslStream) -> bool
"""
KeyExchangeAlgorithm = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the key exchange algorithm used by this System.Net.Security.SslStream.
Get: KeyExchangeAlgorithm(self: SslStream) -> ExchangeAlgorithmType
"""
KeyExchangeStrength = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that identifies the strength of the key exchange algorithm used by this instance.
Get: KeyExchangeStrength(self: SslStream) -> int
"""
Length = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the length of the underlying stream.
Get: Length(self: SslStream) -> Int64
"""
LocalCertificate = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the certificate used to authenticate the local endpoint.
Get: LocalCertificate(self: SslStream) -> X509Certificate
"""
Position = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the current position in the underlying stream.
Get: Position(self: SslStream) -> Int64
Set: Position(self: SslStream) = value
"""
ReadTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the amount of time a read operation blocks waiting for data.
Get: ReadTimeout(self: SslStream) -> int
Set: ReadTimeout(self: SslStream) = value
"""
RemoteCertificate = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the certificate used to authenticate the remote endpoint.
Get: RemoteCertificate(self: SslStream) -> X509Certificate
"""
SslProtocol = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets a value that indicates the security protocol used to authenticate this connection.
Get: SslProtocol(self: SslStream) -> SslProtocols
"""
TransportContext = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the System.Net.TransportContext used for authentication using extended protection.
Get: TransportContext(self: SslStream) -> TransportContext
"""
WriteTimeout = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the amount of time a write operation blocks waiting for data.
Get: WriteTimeout(self: SslStream) -> int
Set: WriteTimeout(self: SslStream) = value
"""
] | 109 | 2015-02-03T13:02:45.000Z | 2021-12-21T12:57:21.000Z | #!/usr/bin/env python
import os
from pkg_resources import parse_version
import sympy as sm
from nose.tools import assert_raises
from ...models import multi_mass_spring_damper
from ..c_code import CMatrixGenerator
SYMPY_VERSION = sm.__version__
class TestCMatrixGenerator():
def setup(self):
self.prefix = 'boogly_bee'
sys = multi_mass_spring_damper(6, True, True)
self.matrices = (sys.eom_method.mass_matrix,
sys.eom_method.forcing)
# NOTE : ordered is used here because this order is different in
# different versions of SymPy.
self.arguments = (list(sm.ordered(sys.constants_symbols)),
sys.coordinates,
sys.speeds,
list(sm.ordered(sys.specifieds_symbols)))
self.generator = CMatrixGenerator(self.arguments, self.matrices)
def test_init(self):
assert self.generator.matrices == self.matrices
assert self.generator.arguments == self.arguments
# Make sure an error is risen if not enough arguments are provided.
arguments = self.arguments[:-1]
assert_raises(ValueError, CMatrixGenerator, arguments, self.matrices)
def test_generate_cse(self):
pd = sm.symbols('pydy_:15')
(c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, m0, m1, m2, m3,
m4, m5) = self.arguments[0]
x0, x1, x2, x3, x4, x5 = self.arguments[1]
v0, v1, v2, v3, v4, v5 = self.arguments[2]
f0, f1, f2, f3, f4, f5 = self.arguments[3]
if parse_version(SYMPY_VERSION) >= parse_version('1.2'):
expected_subexprs = [
(pd[0], m4 + m5),
(pd[1], m3 + pd[0]),
(pd[2], m2 + pd[1]),
(pd[3], m1 + pd[2]),
(pd[4], g*m5 + f5),
(pd[5], g*m4 + pd[4] + f4),
(pd[6], g*m3 + pd[5] + f3),
(pd[7], g*m2 + pd[6] + f2),
(pd[8], g*m1 + pd[7] + f1)]
expected_simplified_matrices = (
sm.Matrix([[m0 + pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[2], pd[2], pd[2], pd[1], pd[0], m5],
[pd[1], pd[1], pd[1], pd[1], pd[0], m5],
[pd[0], pd[0], pd[0], pd[0], pd[0], m5],
[m5, m5, m5, m5, m5, m5]]),
sm.Matrix([[-c0*v0 + g*m0 - k0*x0 + pd[8] + f0],
[-c1*v1 - k1*x1 + pd[8]],
[-c2*v2 - k2*x2 + pd[7]],
[-c3*v3 - k3*x3 + pd[6]],
[-c4*v4 - k4*x4 + pd[5]],
[-c5*v5 - k5*x5 + pd[4]]]))
elif parse_version(SYMPY_VERSION) >= parse_version('1.1'):
expected_subexprs = [
(pd[0], m4 + m5),
(pd[1], m3 + pd[0]),
(pd[2], m2 + pd[1]),
(pd[3], m1 + pd[2]),
(pd[4], f2),
(pd[5], f3),
(pd[6], f4),
(pd[7], f5),
(pd[8], g*m2),
(pd[9], g*m3),
(pd[10], g*m4),
(pd[11], g*m5),
(pd[12], g*m1 + pd[10] + pd[11] + pd[4] + pd[5] + pd[6] +
pd[7] + pd[8] + pd[9] + f1)]
expected_simplified_matrices = (
sm.Matrix([[m0 + pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[2], pd[2], pd[2], pd[1], pd[0], m5],
[pd[1], pd[1], pd[1], pd[1], pd[0], m5],
[pd[0], pd[0], pd[0], pd[0], pd[0], m5],
[m5, m5, m5, m5, m5, m5]]),
sm.Matrix([[-c0*v0 + g*m0 - k0*x0 + pd[12] + f0],
[-c1*v1 - k1*x1 + pd[12]],
[-c2*v2 - k2*x2 + pd[10] + pd[11] + pd[4] + pd[5] +
pd[6] + pd[7] + pd[8] + pd[9]],
[-c3*v3 - k3*x3 + pd[10] + pd[11] + pd[5] + pd[6] +
pd[7] + pd[9]],
[-c4*v4 - k4*x4 + pd[10] + pd[11] + pd[6] + pd[7]],
[-c5*v5 - k5*x5 + pd[11] + pd[7]]]))
elif parse_version(SYMPY_VERSION) > parse_version('1.0'):
expected_subexprs = [
(pd[0], m4 + m5),
(pd[1], m3 + pd[0]),
(pd[2], m2 + pd[1]),
(pd[3], m1 + pd[2]),
(pd[4], f2),
(pd[5], f3),
(pd[6], f4),
(pd[7], f5),
(pd[8], g*m2),
(pd[9], g*m3),
(pd[10], g*m4),
(pd[11], g*m5),
(pd[12], g*m1 + pd[10] + pd[11] + pd[4] + pd[5] + pd[6] +
pd[7] + pd[8] + pd[9] + f1),
(pd[13], pd[10] + pd[11] + pd[5] + pd[6] + pd[7] + pd[9]),
(pd[14], pd[11] + pd[7])]
expected_simplified_matrices = (
sm.Matrix([[m0 + pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[ pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[ pd[2], pd[2], pd[2], pd[1], pd[0], m5],
[ pd[1], pd[1], pd[1], pd[1], pd[0], m5],
[ pd[0], pd[0], pd[0], pd[0], pd[0], m5],
[ m5, m5, m5, m5, m5, m5]]),
sm.Matrix([[ -c0*v0 + g*m0 - k0*x0 + pd[12] + f0],
[ -c1*v1 - k1*x1 + pd[12]],
[ -c2*v2 - k2*x2 + pd[13] + pd[4] + pd[8]],
[ -c3*v3 - k3*x3 + pd[13]],
[-c4*v4 - k4*x4 + pd[10] + pd[14] + pd[6]],
[ -c5*v5 - k5*x5 + pd[14]]]))
else:
expected_subexprs = [
(pd[0], m4 + m5),
(pd[1], m3 + pd[0]),
(pd[2], m2 + pd[1]),
(pd[3], m1 + pd[2]),
(pd[4], f2),
(pd[5], f3),
(pd[6], f4),
(pd[7], f5),
(pd[8], g*m2),
(pd[9], g*m3),
(pd[10], g*m4),
(pd[11], g*m5),
(pd[12], (g*m1 + pd[10] + pd[11] + pd[4] + pd[5] + pd[6] +
pd[7] + pd[8] + pd[9] + f1))]
expected_simplified_matrices = (
sm.Matrix([[m0 + pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[2], pd[2], pd[2], pd[1], pd[0], m5],
[pd[1], pd[1], pd[1], pd[1], pd[0], m5],
[pd[0], pd[0], pd[0], pd[0], pd[0], m5],
[m5, m5, m5, m5, m5, m5]]),
sm.Matrix([-c0*v0 + g*m0 - k0*x0 + pd[12] + f0,
-c1*v1 - k1*x1 + pd[12],
-c2*v2 - k2*x2 + pd[10] + pd[11] + pd[4] + pd[5] +
pd[6] + pd[7] + pd[8] + pd[9],
-c3*v3 - k3*x3 + pd[10] + pd[11] + pd[5] + pd[6] +
pd[7] + pd[9],
-c4*v4 - k4*x4 + pd[10] + pd[11] + pd[6] + pd[7],
-c5*v5 - k5*x5 + pd[11] + pd[7]]))
self.generator._generate_cse()
assert self.generator.subexprs == expected_subexprs
assert self.generator.simplified_matrices == expected_simplified_matrices
def test_skip_cse(self):
(c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, m0, m1, m2, m3,
m4, m5) = self.arguments[0]
x0, x1, x2, x3, x4, x5 = self.arguments[1]
v0, v1, v2, v3, v4, v5 = self.arguments[2]
f0, f1, f2, f3, f4, f5 = self.arguments[3]
expected_subexprs = []
pd = 13 * [0]
pd[0] = m4 + m5
pd[1] = m3 + pd[0]
pd[2] = m2 + pd[1]
pd[3] = m1 + pd[2]
pd[4] = f2
pd[5] = f3
pd[6] = f4
pd[7] = f5
pd[8] = g*m2
pd[9] = g*m3
pd[10] = g*m4
pd[11] = g*m5
pd[12] = (g*m1 + pd[10] + pd[11] + pd[4] + pd[5] + pd[6] + pd[7] +
pd[8] + pd[9] + f1)
expected_simplified_matrices = (
sm.Matrix([[m0 + pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[3], pd[3], pd[2], pd[1], pd[0], m5],
[pd[2], pd[2], pd[2], pd[1], pd[0], m5],
[pd[1], pd[1], pd[1], pd[1], pd[0], m5],
[pd[0], pd[0], pd[0], pd[0], pd[0], m5],
[m5, m5, m5, m5, m5, m5]]),
sm.Matrix([-c0*v0 + g*m0 - k0*x0 + pd[12] + f0,
-c1*v1 - k1*x1 + pd[12],
-c2*v2 - k2*x2 + pd[10] + pd[11] + pd[4] + pd[5] +
pd[6] + pd[7] + pd[8] + pd[9],
-c3*v3 - k3*x3 + pd[10] + pd[11] + pd[5] + pd[6] +
pd[7] + pd[9],
-c4*v4 - k4*x4 + pd[10] + pd[11] + pd[6] + pd[7],
-c5*v5 - k5*x5 + pd[11] + pd[7]]))
self.generator._ignore_cse()
assert self.generator.subexprs == expected_subexprs
assert self.generator.simplified_matrices == expected_simplified_matrices
def test_generate_pydy_printer(self):
PyDyCCodePrinter = self.generator._generate_pydy_printer()
printer = PyDyCCodePrinter()
assert printer.doprint(self.arguments[0][3]) == 'input_0[3]'
assert printer.doprint(self.arguments[1][5]) == 'input_1[5]'
assert printer.doprint(self.arguments[2][1]) == 'input_2[1]'
assert printer.doprint(self.arguments[3][2]) == 'input_3[2]'
def test_generate_comma_lists(self):
expected = (('c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, '
'm0, m1, m2, m3, m4, m5'),
'x0(t), x1(t), x2(t), x3(t), x4(t), x5(t)',
'v0(t), v1(t), v2(t), v3(t), v4(t), v5(t)',
'f0(t), f1(t), f2(t), f3(t), f4(t), f5(t)')
assert self.generator.comma_lists() == expected
def test_generate_code_blocks(self):
expected = {}
expected['input_args'] = \
"""\
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],\
"""
expected['output_args'] = \
"""\
double output_0[36],
double output_1[6]\
"""
expected['input_docstring'] = \
"""\
input_0[19] : [c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, m0, m1, m2,
m3, m4, m5]
input_1[6] : [x0(t), x1(t), x2(t), x3(t), x4(t), x5(t)]
input_2[6] : [v0(t), v1(t), v2(t), v3(t), v4(t), v5(t)]
input_3[6] : [f0(t), f1(t), f2(t), f3(t), f4(t), f5(t)]\
"""
if parse_version(SYMPY_VERSION) >= parse_version('1.2'):
expected['subexprs'] = \
"""\
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_0[6] * input_0[18] + input_3[5];
double pydy_5 = input_0[6] * input_0[17] + pydy_4 + input_3[4];
double pydy_6 = input_0[6] * input_0[16] + pydy_5 + input_3[3];
double pydy_7 = input_0[6] * input_0[15] + pydy_6 + input_3[2];
double pydy_8 = input_0[6] * input_0[14] + pydy_7 + input_3[1];\
"""
expected['outputs'] = \
"""\
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_8 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_8;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_7;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_6;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_5;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_4;\
"""
elif parse_version(SYMPY_VERSION) >= parse_version('1.1'):
expected['subexprs'] = \
"""\
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];\
"""
expected['outputs'] = \
"""\
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_10
+ pydy_11 + pydy_4 + pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_10
+ pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_11 + pydy_6 + pydy_7;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_11
+ pydy_7;\
"""
elif parse_version(SYMPY_VERSION) > parse_version('1.0'):
expected['subexprs'] = \
"""\
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];
double pydy_13 = pydy_10 + pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
double pydy_14 = pydy_11 + pydy_7;\
"""
expected['outputs'] = \
"""\
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_13
+ pydy_4 + pydy_8;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] +
pydy_13;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_14 + pydy_6;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] +
pydy_14;\
"""
else:
expected['subexprs'] = \
"""\
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];\
"""
expected['outputs'] = \
"""\
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_10
+ pydy_11 + pydy_4 + pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_10
+ pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_11 + pydy_6 + pydy_7;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_11
+ pydy_7;\
"""
self.generator._generate_cse()
self.generator._generate_code_blocks()
for k, v in self.generator.code_blocks.items():
assert v == expected[k]
def test_generate_code_blocks_without_cse(self):
expected = {}
expected['input_args'] = \
"""\
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],\
"""
expected['output_args'] = \
"""\
double output_0[36],
double output_1[6]\
"""
expected['input_docstring'] = \
"""\
input_0[19] : [c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, m0, m1, m2,
m3, m4, m5]
input_1[6] : [x0(t), x1(t), x2(t), x3(t), x4(t), x5(t)]
input_2[6] : [v0(t), v1(t), v2(t), v3(t), v4(t), v5(t)]
input_3[6] : [f0(t), f1(t), f2(t), f3(t), f4(t), f5(t)]\
"""
expected['subexprs'] = \
""" \
"""
expected['outputs'] = \
"""\
output_0[0] = input_0[13] + input_0[14] + input_0[15] + input_0[16] +
input_0[17] + input_0[18];
output_0[1] = input_0[14] + input_0[15] + input_0[16] + input_0[17] +
input_0[18];
output_0[2] = input_0[15] + input_0[16] + input_0[17] + input_0[18];
output_0[3] = input_0[16] + input_0[17] + input_0[18];
output_0[4] = input_0[17] + input_0[18];
output_0[5] = input_0[18];
output_0[6] = input_0[14] + input_0[15] + input_0[16] + input_0[17] +
input_0[18];
output_0[7] = input_0[14] + input_0[15] + input_0[16] + input_0[17] +
input_0[18];
output_0[8] = input_0[15] + input_0[16] + input_0[17] + input_0[18];
output_0[9] = input_0[16] + input_0[17] + input_0[18];
output_0[10] = input_0[17] + input_0[18];
output_0[11] = input_0[18];
output_0[12] = input_0[15] + input_0[16] + input_0[17] + input_0[18];
output_0[13] = input_0[15] + input_0[16] + input_0[17] + input_0[18];
output_0[14] = input_0[15] + input_0[16] + input_0[17] + input_0[18];
output_0[15] = input_0[16] + input_0[17] + input_0[18];
output_0[16] = input_0[17] + input_0[18];
output_0[17] = input_0[18];
output_0[18] = input_0[16] + input_0[17] + input_0[18];
output_0[19] = input_0[16] + input_0[17] + input_0[18];
output_0[20] = input_0[16] + input_0[17] + input_0[18];
output_0[21] = input_0[16] + input_0[17] + input_0[18];
output_0[22] = input_0[17] + input_0[18];
output_0[23] = input_0[18];
output_0[24] = input_0[17] + input_0[18];
output_0[25] = input_0[17] + input_0[18];
output_0[26] = input_0[17] + input_0[18];
output_0[27] = input_0[17] + input_0[18];
output_0[28] = input_0[17] + input_0[18];
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] +
input_0[6] * input_0[14] + input_0[6] * input_0[15] + input_0[6] *
input_0[16] + input_0[6] * input_0[17] + input_0[6] * input_0[18] -
input_0[7] * input_1[0] + input_3[0] + input_3[1] + input_3[2] + input_3[3]
+ input_3[4] + input_3[5];
output_1[1] = -input_0[1] * input_2[1] + input_0[6] * input_0[14] +
input_0[6] * input_0[15] + input_0[6] * input_0[16] + input_0[6] *
input_0[17] + input_0[6] * input_0[18] - input_0[8] * input_1[1] +
input_3[1] + input_3[2] + input_3[3] + input_3[4] + input_3[5];
output_1[2] = -input_0[2] * input_2[2] + input_0[6] * input_0[15] +
input_0[6] * input_0[16] + input_0[6] * input_0[17] + input_0[6] *
input_0[18] - input_0[9] * input_1[2] + input_3[2] + input_3[3] +
input_3[4] + input_3[5];
output_1[3] = -input_0[3] * input_2[3] + input_0[6] * input_0[16] +
input_0[6] * input_0[17] + input_0[6] * input_0[18] - input_0[10] *
input_1[3] + input_3[3] + input_3[4] + input_3[5];
output_1[4] = -input_0[4] * input_2[4] + input_0[6] * input_0[17] +
input_0[6] * input_0[18] - input_0[11] * input_1[4] + input_3[4] +
input_3[5];
output_1[5] = -input_0[5] * input_2[5] + input_0[6] * input_0[18] -
input_0[12] * input_1[5] + input_3[5];\
"""
self.generator._ignore_cse()
self.generator._generate_code_blocks()
for k, v in self.generator.code_blocks.items():
assert v == expected[k]
def test_doprint(self):
expected_header = """\
void evaluate(
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],
double output_0[36],
double output_1[6]
);
/*
input_0[19] : [c0, c1, c2, c3, c4, c5, g, k0, k1, k2, k3, k4, k5, m0, m1, m2,
m3, m4, m5]
input_1[6] : [x0(t), x1(t), x2(t), x3(t), x4(t), x5(t)]
input_2[6] : [v0(t), v1(t), v2(t), v3(t), v4(t), v5(t)]
input_3[6] : [f0(t), f1(t), f2(t), f3(t), f4(t), f5(t)]
*/\
"""
if parse_version(SYMPY_VERSION) >= parse_version('1.2'):
expected_source = """\
#include <math.h>
#include "boogly_bee.h"
void evaluate(
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],
double output_0[36],
double output_1[6]
)
{
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_0[6] * input_0[18] + input_3[5];
double pydy_5 = input_0[6] * input_0[17] + pydy_4 + input_3[4];
double pydy_6 = input_0[6] * input_0[16] + pydy_5 + input_3[3];
double pydy_7 = input_0[6] * input_0[15] + pydy_6 + input_3[2];
double pydy_8 = input_0[6] * input_0[14] + pydy_7 + input_3[1];
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_8 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_8;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_7;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_6;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_5;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_4;
}\
"""
elif parse_version(SYMPY_VERSION) >= parse_version('1.1'):
expected_source = """\
#include <math.h>
#include "boogly_bee.h"
void evaluate(
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],
double output_0[36],
double output_1[6]
)
{
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_10
+ pydy_11 + pydy_4 + pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_10
+ pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_11 + pydy_6 + pydy_7;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_11
+ pydy_7;
}\
"""
elif parse_version(SYMPY_VERSION) > parse_version('1.0'):
expected_source = """\
#include <math.h>
#include "boogly_bee.h"
void evaluate(
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],
double output_0[36],
double output_1[6]
)
{
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];
double pydy_13 = pydy_10 + pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
double pydy_14 = pydy_11 + pydy_7;
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_13
+ pydy_4 + pydy_8;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] +
pydy_13;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_14 + pydy_6;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] +
pydy_14;
}\
"""
else:
expected_source = """\
#include <math.h>
#include "boogly_bee.h"
void evaluate(
double input_0[19],
double input_1[6],
double input_2[6],
double input_3[6],
double output_0[36],
double output_1[6]
)
{
double pydy_0 = input_0[17] + input_0[18];
double pydy_1 = input_0[16] + pydy_0;
double pydy_2 = input_0[15] + pydy_1;
double pydy_3 = input_0[14] + pydy_2;
double pydy_4 = input_3[2];
double pydy_5 = input_3[3];
double pydy_6 = input_3[4];
double pydy_7 = input_3[5];
double pydy_8 = input_0[6] * input_0[15];
double pydy_9 = input_0[6] * input_0[16];
double pydy_10 = input_0[6] * input_0[17];
double pydy_11 = input_0[6] * input_0[18];
double pydy_12 = input_0[6] * input_0[14] + pydy_10 + pydy_11 + pydy_4 +
pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9 + input_3[1];
output_0[0] = input_0[13] + pydy_3;
output_0[1] = pydy_3;
output_0[2] = pydy_2;
output_0[3] = pydy_1;
output_0[4] = pydy_0;
output_0[5] = input_0[18];
output_0[6] = pydy_3;
output_0[7] = pydy_3;
output_0[8] = pydy_2;
output_0[9] = pydy_1;
output_0[10] = pydy_0;
output_0[11] = input_0[18];
output_0[12] = pydy_2;
output_0[13] = pydy_2;
output_0[14] = pydy_2;
output_0[15] = pydy_1;
output_0[16] = pydy_0;
output_0[17] = input_0[18];
output_0[18] = pydy_1;
output_0[19] = pydy_1;
output_0[20] = pydy_1;
output_0[21] = pydy_1;
output_0[22] = pydy_0;
output_0[23] = input_0[18];
output_0[24] = pydy_0;
output_0[25] = pydy_0;
output_0[26] = pydy_0;
output_0[27] = pydy_0;
output_0[28] = pydy_0;
output_0[29] = input_0[18];
output_0[30] = input_0[18];
output_0[31] = input_0[18];
output_0[32] = input_0[18];
output_0[33] = input_0[18];
output_0[34] = input_0[18];
output_0[35] = input_0[18];
output_1[0] = -input_0[0] * input_2[0] + input_0[6] * input_0[13] -
input_0[7] * input_1[0] + pydy_12 + input_3[0];
output_1[1] = -input_0[1] * input_2[1] - input_0[8] * input_1[1] + pydy_12;
output_1[2] = -input_0[2] * input_2[2] - input_0[9] * input_1[2] + pydy_10
+ pydy_11 + pydy_4 + pydy_5 + pydy_6 + pydy_7 + pydy_8 + pydy_9;
output_1[3] = -input_0[3] * input_2[3] - input_0[10] * input_1[3] + pydy_10
+ pydy_11 + pydy_5 + pydy_6 + pydy_7 + pydy_9;
output_1[4] = -input_0[4] * input_2[4] - input_0[11] * input_1[4] + pydy_10
+ pydy_11 + pydy_6 + pydy_7;
output_1[5] = -input_0[5] * input_2[5] - input_0[12] * input_1[5] + pydy_11
+ pydy_7;
}\
"""
header, source = self.generator.doprint()
assert header == expected_header
lines = expected_source.split('\n')
assert source == '\n'.join(lines[:1] + lines[2:])
header, source = self.generator.doprint(prefix=self.prefix)
assert header == expected_header
assert source == expected_source
def test_write(self):
header, source = self.generator.doprint(prefix=self.prefix)
self.generator.write(self.prefix)
with open(self.prefix + '.h') as f:
assert f.read() == header
with open(self.prefix + '.c') as f:
assert f.read() == source
def teardown(self):
if os.path.isfile(self.prefix + '.h'):
os.remove(self.prefix + '.h')
if os.path.isfile(self.prefix + '.c'):
os.remove(self.prefix + '.c')
# File: models/__init__.py (SSAW14/BeyondtheSpectrum, MIT license)
from .resnet import get_resnet
from .resnet_cifar import get_cifar_resnet
def get_classification_model(arch, pretrained, **kwargs):
return get_resnet(arch, pretrained, **kwargs)
def get_cifar_classification_model(arch, pretrained, **kwargs):
return get_cifar_resnet(arch, pretrained, **kwargs)
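The two wrappers above are thin pass-throughs to architecture-specific constructors. The same dispatch is sometimes kept in a lookup table instead; a minimal sketch of that pattern, with a hypothetical `_FACTORIES` registry standing in for the real ResNet constructors (names here are illustrative, not part of this package):

```python
from typing import Callable, Dict

# Hypothetical registry; the real constructors live in .resnet / .resnet_cifar.
_FACTORIES: Dict[str, Callable[..., str]] = {
    "resnet18": lambda pretrained=False: f"resnet18(pretrained={pretrained})",
    "resnet50": lambda pretrained=False: f"resnet50(pretrained={pretrained})",
}

def get_classification_model(arch: str, pretrained: bool = False) -> str:
    # Dispatch on the architecture name, forwarding the remaining arguments.
    return _FACTORIES[arch](pretrained=pretrained)

print(get_classification_model("resnet18", pretrained=True))
```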
# File: tests/unit-tests/test_rst_literal.py (BenGale93/confluencebuilder, BSD-2-Clause)
# -*- coding: utf-8 -*-
"""
:copyright: Copyright 2016-2021 Sphinx Confluence Builder Contributors (AUTHORS)
:license: BSD-2-Clause (LICENSE)
"""
from bs4 import CData
from tests.lib import build_sphinx
from tests.lib import parse
from tests.lib import prepare_conf
import os
import unittest
class TestConfluenceRstLiteral(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.config = prepare_conf()
test_dir = os.path.dirname(os.path.realpath(__file__))
cls.dataset = os.path.join(test_dir, 'datasets', 'common')
def test_storage_rst_literal_blocks(self):
out_dir = build_sphinx(self.dataset, config=self.config,
filenames=['literal-blocks'])
with parse('literal-blocks', out_dir) as data:
code_macros = data.find_all('ac:structured-macro',
{'ac:name': 'code'})
self.assertIsNotNone(code_macros)
self.assertEqual(len(code_macros), 4)
# ensure each code block has cdata content
for code_macro in code_macros:
code_body = code_macro.find('ac:plain-text-body')
cdata_block = next(code_body.children, None)
self.assertIsNotNone(cdata_block)
self.assertTrue(isinstance(cdata_block, CData))
# blocks should be python by default (per Sphinx)
for code_macro in code_macros:
lang = code_macro.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'python')
def test_storage_rst_literal_includes(self):
out_dir = build_sphinx(self.dataset, config=self.config,
filenames=['literal-includes'])
with parse('literal-includes', out_dir) as data:
code_macros = data.find_all('ac:structured-macro',
{'ac:name': 'code'})
self.assertIsNotNone(code_macros)
self.assertEqual(len(code_macros), 6)
# ensure each code block has cdata content
for code_macro in code_macros:
code_body = code_macro.find('ac:plain-text-body')
cdata_block = next(code_body.children, None)
self.assertIsNotNone(cdata_block)
self.assertTrue(isinstance(cdata_block, CData))
# c block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'cpp')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'false')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNone(firstline)
# cpp block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'cpp')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'false')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNone(firstline)
# html block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'html/xml')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'true')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNone(firstline)
# java block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'java')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'false')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNone(firstline)
# python block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'python')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'false')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNone(firstline)
# python (lineno-match) block
block = code_macros.pop(0)
lang = block.find('ac:parameter', {'ac:name': 'language'})
self.assertIsNotNone(lang)
self.assertEqual(lang.text, 'python')
linenumbers = block.find('ac:parameter', {'ac:name': 'linenumbers'})
self.assertIsNotNone(linenumbers)
self.assertEqual(linenumbers.text, 'true')
firstline = block.find('ac:parameter', {'ac:name': 'firstline'})
self.assertIsNotNone(firstline)
self.assertEqual(firstline.text, '6')
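For reference, the Confluence storage-format shape these assertions traverse looks roughly like the fragment below. This is an illustrative sketch pieced together from the macro, parameter, and CDATA names the tests check (`code`, `language`, `linenumbers`, `firstline`, `ac:plain-text-body`), not captured builder output:

```xml
<ac:structured-macro ac:name="code">
  <ac:parameter ac:name="language">python</ac:parameter>
  <ac:parameter ac:name="linenumbers">true</ac:parameter>
  <ac:parameter ac:name="firstline">6</ac:parameter>
  <ac:plain-text-body><![CDATA[print("example")]]></ac:plain-text-body>
</ac:structured-macro>
```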
# File: Midas/test/valetest.py (mihneagogu/Vale, Apache-2.0)
import unittest
import subprocess
import platform
import os.path
import os
import sys
import shutil
import glob
from typing import Dict, Any, List, Callable
def procrun(args: List[str], **kwargs) -> subprocess.CompletedProcess:
# print("Running: " + " ".join(args))
return subprocess.run(args, capture_output=True, text=True, **kwargs)
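`procrun` is a thin wrapper over `subprocess.run` that captures stdout/stderr as text and returns the `CompletedProcess`. A self-contained sketch of the same call shape, using the Python interpreter as a stand-in child process:

```python
import subprocess
import sys
from typing import List

def procrun(args: List[str], **kwargs) -> subprocess.CompletedProcess:
    # Capture stdout/stderr as text so callers can inspect them directly.
    return subprocess.run(args, capture_output=True, text=True, **kwargs)

# Child prints "ok" and exits with status 3; both are visible on the result.
proc = procrun([sys.executable, "-c", "import sys; print('ok'); sys.exit(3)"])
print(proc.returncode)       # 3
print(proc.stdout.strip())   # ok
```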
PATH_TO_SAMPLES = "../Valestrom/Tests/test/main/resources/"
class ValeTest(unittest.TestCase):
GENPATH: str = os.environ.get('GENPATH', ".")
def valec(self,
module_name: str,
in_filepaths: List[str],
o_files_dir: str,
exe_name: str,
region_override: str,
extra_flags: List[str]) -> subprocess.CompletedProcess:
assert self.GENPATH
python = "python" if self.windows else "python3"
return procrun(
[python,
f"{self.GENPATH}/valec.py",
"build",
module_name,
"--verify",
"--llvmir",
"--census",
"--flares",
"--region-override", region_override,
"--output-dir", o_files_dir,
"--add-exports-include-path",
"-o",
exe_name] + extra_flags + in_filepaths)
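The method above assembles a `valec.py build` command line. Under assumed inputs, the argv it produces looks like the list below; the flags are the ones passed above, while the module and file names are made up for illustration (and `extra_flags` is taken to be empty):

```python
# Hypothetical inputs mirroring what valec() receives.
GENPATH = "."
module_name = "tmod"
in_filepaths = ["tmod:hello.vale"]

argv = (["python3", f"{GENPATH}/valec.py", "build", module_name,
         "--verify", "--llvmir", "--census", "--flares",
         "--region-override", "assist",
         "--output-dir", "build_dir",
         "--add-exports-include-path",
         "-o", "hello"]
        + in_filepaths)  # extra_flags omitted (empty here)

print(argv[2], argv[3])   # build tmod
print(argv[-1])           # tmod:hello.vale
```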
def exec(self, exe_file: str) -> subprocess.CompletedProcess:
return procrun([f"./{exe_file}"])
@classmethod
def setUpClass(cls) -> None:
print(
f"Using valec from {cls.GENPATH}. " +
"Set GENPATH env var if this is incorrect",
file=sys.stderr)
def setUp(self) -> None:
self.GENPATH: str = type(self).GENPATH
self.windows = platform.system() == 'Windows'
def compile_and_execute(
self,
in_filepaths: List[str],
region_override: str,
extra_flags: List[str]) -> subprocess.CompletedProcess:
first_vale_filepath = in_filepaths[0]
file_name_without_extension = os.path.splitext(os.path.basename(first_vale_filepath))[0]
build_dir = f"test/test_build/{file_name_without_extension}_build"
module_name = "tmod"
for i in range(0, len(in_filepaths)):
in_filepaths[i] = module_name + ":" + in_filepaths[i]
proc = self.valec(module_name, in_filepaths, build_dir, file_name_without_extension, region_override, extra_flags)
self.assertEqual(proc.returncode, 0,
f"valec couldn't compile {in_filepaths}:\n" +
proc.stdout + "\n" + proc.stderr)
exe_file = f"{build_dir}/{file_name_without_extension}"
proc = self.exec(exe_file)
return proc
def compile_and_execute_and_expect_return_code(
self,
vale_files: List[str],
region_override: str,
expected_return_code: int,
extra_flags: List[str] = None) -> None:
if extra_flags is None:
extra_flags = []
proc = self.compile_and_execute(vale_files, region_override, extra_flags)
# print(proc.stdout)
# print(proc.stderr)
if proc.returncode != expected_return_code:
first_vale_filepath = vale_files[0]
file_name_without_extension = os.path.splitext(os.path.basename(first_vale_filepath))[0]
build_dir = f"test/test_build/{file_name_without_extension}_build"
textfile = open(build_dir + "/stdout.txt", "w")
textfile.write(proc.stdout)
textfile.close()
textfile = open(build_dir + "/stderr.txt", "w")
textfile.write(proc.stderr)
textfile.close()
self.assertEqual(proc.returncode, expected_return_code,
f"Unexpected result: {proc.returncode}\n" + proc.stdout + proc.stderr)
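The compile/run/assert flow above can be exercised in miniature without the Vale toolchain; in this sketch the Python interpreter stands in for the compiled binary (illustrative only, not part of the test suite):

```python
import subprocess
import sys

def run_and_expect(snippet: str, expected_return_code: int) -> None:
    # Mirrors compile_and_execute_and_expect_return_code: run a child
    # process, then assert on its exit status, surfacing its output on failure.
    proc = subprocess.run([sys.executable, "-c", snippet],
                          capture_output=True, text=True)
    assert proc.returncode == expected_return_code, proc.stdout + proc.stderr

run_and_expect("import sys; sys.exit(42)", 42)
print("ok")
```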
# Tests for immutables in exports/externs
def test_assist_strlenextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/strlenextern"], "assist", 11)
def test_assist_voidreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/voidreturnexport"], "assist", 42)
def test_assist_structimmparamextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structimmparamextern"], "assist", 42)
def test_assist_structimmparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structimmparamexport"], "assist", 42)
def test_assist_structimmparamdeepextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structimmparamdeepextern"], "assist", 42)
def test_assist_structimmparamdeepexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structimmparamdeepexport"], "assist", 42)
def test_assist_interfaceimmparamextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmparamextern"], "assist", 42)
def test_assist_interfaceimmparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmparamexport"], "assist", 42)
def test_assist_interfaceimmparamdeepextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmparamdeepextern"], "assist", 42)
def test_assist_interfaceimmparamdeepexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmparamdeepexport"], "assist", 42)
def test_assist_rsaimmreturnextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmreturnextern"], "assist", 42)
def test_assist_rsaimmreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmreturnexport"], "assist", 42)
def test_assist_rsaimmparamextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmparamextern"], "assist", 10)
def test_assist_rsaimmparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmparamexport"], "assist", 10)
def test_assist_rsaimmparamdeepextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmparamdeepextern"], "assist", 20)
def test_assist_rsaimmparamdeepexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsaimmparamdeepexport"], "assist", 42)
def test_assist_ssaimmparamextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmparamextern"], "assist", 42)
def test_assist_ssaimmparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmparamexport"], "assist", 42)
def test_assist_ssaimmreturnextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmreturnextern"], "assist", 42)
def test_assist_ssaimmreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmreturnexport"], "assist", 42)
def test_assist_ssaimmparamdeepextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmparamdeepextern"], "assist", 42)
def test_assist_ssaimmparamdeepexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssaimmparamdeepexport"], "assist", 42)
def test_assist_smallstr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/smallstr.vale"], "assist", 42)
def test_assist_immtupleaccess(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/tuples/immtupleaccess.vale"], "assist", 42)
def test_assist_ssaimmfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssaimmfromcallable.vale"], "assist", 42)
def test_assist_ssaimmfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssaimmfromvalues.vale"], "assist", 42)
# kldc = known live double check
def test_resilientv3_kldc(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/deadmutstruct.vale"], "resilient-v3", 116, ["--override-known-live-true"])
def test_twinpages(self) -> None:
proc = procrun(["clang", "test/testtwinpages.c", "-o", "test/test_build/testtwinpages"])
self.assertEqual(proc.returncode, 0, "Twin pages test failed!")
def test_assist_addret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/addret.vale"], "assist", 7)
def test_assist_add64ret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/add64ret.vale"], "assist", 42)
def test_assist_floatarithmetic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/floatarithmetic.vale"], "assist", 42)
def test_assist_floateq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/floateq.vale"], "assist", 42)
def test_assist_concatstrfloat(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/concatstrfloat.vale"], "assist", 42)
def test_resilientv4_tether(self) -> None:
self.compile_and_execute_and_expect_return_code(["test/tether.vale"], "resilient-v4", 0)
def test_assist_mutswaplocals(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutswaplocals.vale"], "assist", 42)
def test_unsafefast_mutswaplocals(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutswaplocals.vale"], "unsafe-fast", 42)
def test_resilientv4_mutswaplocals(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutswaplocals.vale"], "resilient-v4", 42)
def test_resilientv3_mutswaplocals(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutswaplocals.vale"], "resilient-v3", 42)
def test_naiverc_mutswaplocals(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutswaplocals.vale"], "naive-rc", 42)
def test_assist_rsamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutreturnexport"], "assist", 42)
def test_unsafefast_rsamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutreturnexport"], "unsafe-fast", 42)
def test_resilientv4_rsamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutreturnexport"], "resilient-v4", 42)
def test_resilientv3_rsamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutreturnexport"], "resilient-v3", 42)
# def test_naiverc_rsamutreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutreturnexport"], "naive-rc", 42)
def test_assist_ssamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutreturnexport"], "assist", 42)
def test_unsafefast_ssamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutreturnexport"], "unsafe-fast", 42)
def test_resilientv4_ssamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutreturnexport"], "resilient-v4", 42)
def test_resilientv3_ssamutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutreturnexport"], "resilient-v3", 42)
# def test_naiverc_ssamutreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutreturnexport"], "naive-rc", 42)
def test_assist_structimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structimm.vale"], "assist", 5)
def test_unsafefast_structimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structimm.vale"], "unsafe-fast", 5)
def test_resilientv4_structimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structimm.vale"], "resilient-v4", 5)
def test_resilientv3_structimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structimm.vale"], "resilient-v3", 5)
def test_naiverc_structimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structimm.vale"], "naive-rc", 5)
def test_assist_memberrefcount(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/memberrefcount.vale"], "assist", 5)
def test_unsafefast_memberrefcount(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/memberrefcount.vale"], "unsafe-fast", 5)
def test_resilientv4_memberrefcount(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/memberrefcount.vale"], "resilient-v4", 5)
def test_resilientv3_memberrefcount(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/memberrefcount.vale"], "resilient-v3", 5)
def test_naiverc_memberrefcount(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/memberrefcount.vale"], "naive-rc", 5)
def test_assist_bigstructimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/bigstructimm.vale"], "assist", 42)
def test_unsafefast_bigstructimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/bigstructimm.vale"], "unsafe-fast", 42)
def test_resilientv4_bigstructimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/bigstructimm.vale"], "resilient-v4", 42)
def test_resilientv3_bigstructimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/bigstructimm.vale"], "resilient-v3", 42)
def test_naiverc_bigstructimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/bigstructimm.vale"], "naive-rc", 42)
def test_assist_structmut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmut.vale"], "assist", 8)
def test_unsafefast_structmut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmut.vale"], "unsafe-fast", 8)
def test_resilientv4_structmut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmut.vale"], "resilient-v4", 8)
def test_resilientv3_structmut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmut.vale"], "resilient-v3", 8)
def test_naiverc_structmut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmut.vale"], "naive-rc", 8)
def test_assist_lambda(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambda.vale"], "assist", 42)
def test_unsafefast_lambda(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambda.vale"], "unsafe-fast", 42)
def test_resilientv4_lambda(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambda.vale"], "resilient-v4", 42)
def test_resilientv3_lambda(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambda.vale"], "resilient-v3", 42)
def test_naiverc_lambda(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambda.vale"], "naive-rc", 42)
def test_assist_if(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/if.vale"], "assist", 42)
def test_unsafefast_if(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/if.vale"], "unsafe-fast", 42)
def test_resilientv4_if(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/if.vale"], "resilient-v4", 42)
def test_resilientv3_if(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/if.vale"], "resilient-v3", 42)
def test_naiverc_if(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/if.vale"], "naive-rc", 42)
def test_assist_upcastif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/upcastif.vale"], "assist", 42)
def test_unsafefast_upcastif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/upcastif.vale"], "unsafe-fast", 42)
def test_resilientv4_upcastif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/upcastif.vale"], "resilient-v4", 42)
def test_resilientv3_upcastif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/upcastif.vale"], "resilient-v3", 42)
def test_naiverc_upcastif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/upcastif.vale"], "naive-rc", 42)
def test_assist_ifnevers(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/ifnevers.vale"], "assist", 42)
def test_unsafefast_ifnevers(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/ifnevers.vale"], "unsafe-fast", 42)
def test_resilientv4_ifnevers(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/ifnevers.vale"], "resilient-v4", 42)
def test_resilientv3_ifnevers(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/ifnevers.vale"], "resilient-v3", 42)
def test_naiverc_ifnevers(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/ifnevers.vale"], "naive-rc", 42)
def test_assist_mutlocal(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutlocal.vale"], "assist", 42)
def test_unsafefast_mutlocal(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutlocal.vale"], "unsafe-fast", 42)
def test_resilientv4_mutlocal(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutlocal.vale"], "resilient-v4", 42)
def test_resilientv3_mutlocal(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutlocal.vale"], "resilient-v3", 42)
def test_naiverc_mutlocal(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/mutlocal.vale"], "naive-rc", 42)
def test_assist_while(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/while/while.vale"], "assist", 42)
def test_unsafefast_while(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/while/while.vale"], "unsafe-fast", 42)
def test_resilientv4_while(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/while/while.vale"], "resilient-v4", 42)
def test_resilientv3_while(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/while/while.vale"], "resilient-v3", 42)
def test_naiverc_while(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/while/while.vale"], "naive-rc", 42)
def test_assist_constraintRef(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/constraintRef.vale"], "assist", 8)
def test_unsafefast_constraintRef(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/constraintRef.vale"], "unsafe-fast", 8)
def test_resilientv4_constraintRef(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/constraintRef.vale"], "resilient-v4", 8)
def test_resilientv3_constraintRef(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/constraintRef.vale"], "resilient-v3", 8)
def test_naiverc_constraintRef(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/constraintRef.vale"], "naive-rc", 8)
def test_assist_ssamutfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromcallable.vale"], "assist", 42)
def test_unsafefast_ssamutfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromcallable.vale"], "unsafe-fast", 42)
def test_resilientv4_ssamutfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromcallable.vale"], "resilient-v4", 42)
def test_resilientv3_ssamutfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromcallable.vale"], "resilient-v3", 42)
def test_naiverc_ssamutfromcallable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromcallable.vale"], "naive-rc", 42)
def test_assist_ssamutfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromvalues.vale"], "assist", 42)
def test_unsafefast_ssamutfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromvalues.vale"], "unsafe-fast", 42)
def test_resilientv4_ssamutfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromvalues.vale"], "resilient-v4", 42)
def test_resilientv3_ssamutfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromvalues.vale"], "resilient-v3", 42)
def test_naiverc_ssamutfromvalues(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/ssamutfromvalues.vale"], "naive-rc", 42)
def test_assist_interfaceimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfaceimm.vale"], "assist", 42)
def test_unsafefast_interfaceimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfaceimm.vale"], "unsafe-fast", 42)
def test_resilientv4_interfaceimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfaceimm.vale"], "resilient-v4", 42)
def test_resilientv3_interfaceimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfaceimm.vale"], "resilient-v3", 42)
def test_naiverc_interfaceimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfaceimm.vale"], "naive-rc", 42)
def test_assist_interfacemut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfacemut.vale"], "assist", 42)
def test_unsafefast_interfacemut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfacemut.vale"], "unsafe-fast", 42)
def test_resilientv4_interfacemut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfacemut.vale"], "resilient-v4", 42)
def test_resilientv3_interfacemut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfacemut.vale"], "resilient-v3", 42)
def test_naiverc_interfacemut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/virtuals/interfacemut.vale"], "naive-rc", 42)
def test_assist_structmutstore(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstore.vale"], "assist", 42)
def test_unsafefast_structmutstore(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstore.vale"], "unsafe-fast", 42)
def test_resilientv4_structmutstore(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstore.vale"], "resilient-v4", 42)
def test_resilientv3_structmutstore(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstore.vale"], "resilient-v3", 42)
def test_naiverc_structmutstore(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstore.vale"], "naive-rc", 42)
def test_assist_structmutstoreinner(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstoreinner.vale"], "assist", 42)
def test_unsafefast_structmutstoreinner(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstoreinner.vale"], "unsafe-fast", 42)
def test_resilientv4_structmutstoreinner(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstoreinner.vale"], "resilient-v4", 42)
def test_resilientv3_structmutstoreinner(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstoreinner.vale"], "resilient-v3", 42)
def test_naiverc_structmutstoreinner(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/structs/structmutstoreinner.vale"], "naive-rc", 42)
def test_assist_rsaimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsaimm.vale"], "assist", 3)
def test_unsafefast_rsaimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsaimm.vale"], "unsafe-fast", 3)
def test_resilientv4_rsaimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsaimm.vale"], "resilient-v4", 3)
def test_resilientv3_rsaimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsaimm.vale"], "resilient-v3", 3)
def test_naiverc_rsaimm(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsaimm.vale"], "naive-rc", 3)
def test_assist_rsamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamut.vale"], "assist", 3)
def test_unsafefast_rsamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamut.vale"], "unsafe-fast", 3)
def test_resilientv4_rsamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamut.vale"], "resilient-v4", 3)
def test_resilientv3_rsamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamut.vale"], "resilient-v3", 3)
def test_naiverc_rsamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamut.vale"], "naive-rc", 3)
def test_assist_interfacemutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutreturnexport"], "assist", 42)
def test_unsafefast_interfacemutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutreturnexport"], "unsafe-fast", 42)
def test_resilientv4_interfacemutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutreturnexport"], "resilient-v4", 42)
def test_resilientv3_interfacemutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutreturnexport"], "resilient-v3", 42)
# def test_naiverc_interfacemutreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutreturnexport"], "naive-rc", 42)
def test_assist_interfacemutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutparamexport"], "assist", 42)
def test_unsafefast_interfacemutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutparamexport"], "unsafe-fast", 42)
def test_resilientv4_interfacemutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutparamexport"], "resilient-v4", 42)
def test_resilientv3_interfacemutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutparamexport"], "resilient-v3", 42)
# def test_naiverc_interfacemutparamexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfacemutparamexport"], "naive-rc", 42)
def test_assist_structmutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutreturnexport"], "assist", 42)
def test_unsafefast_structmutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutreturnexport"], "unsafe-fast", 42)
def test_resilientv4_structmutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutreturnexport"], "resilient-v4", 42)
def test_resilientv3_structmutreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutreturnexport"], "resilient-v3", 42)
# def test_naiverc_structmutreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutreturnexport"], "naive-rc", 42)
def test_assist_structmutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutparamexport"], "assist", 42)
def test_unsafefast_structmutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutparamexport"], "unsafe-fast", 42)
def test_resilientv4_structmutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutparamexport"], "resilient-v4", 42)
def test_resilientv3_structmutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutparamexport"], "resilient-v3", 42)
# def test_naiverc_structmutparamexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/structmutparamexport"], "naive-rc", 42)
def test_assist_rsamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutparamexport"], "assist", 10)
def test_unsafefast_rsamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutparamexport"], "unsafe-fast", 10)
def test_resilientv4_rsamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutparamexport"], "resilient-v4", 10)
def test_resilientv3_rsamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutparamexport"], "resilient-v3", 10)
# def test_naiverc_rsamutparamexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/rsamutparamexport"], "naive-rc", 10)
def test_assist_ssamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutparamexport"], "assist", 10)
def test_unsafefast_ssamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutparamexport"], "unsafe-fast", 10)
def test_resilientv4_ssamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutparamexport"], "resilient-v4", 10)
def test_resilientv3_ssamutparamexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutparamexport"], "resilient-v3", 10)
# def test_naiverc_ssamutparamexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/ssamutparamexport"], "naive-rc", 10)
def test_assist_rsamutlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamutlen.vale"], "assist", 5)
def test_unsafefast_rsamutlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamutlen.vale"], "unsafe-fast", 5)
def test_resilientv4_rsamutlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamutlen.vale"], "resilient-v4", 5)
def test_resilientv3_rsamutlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamutlen.vale"], "resilient-v3", 5)
def test_naiverc_rsamutlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/rsamutlen.vale"], "naive-rc", 5)
def test_assist_stradd(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/stradd.vale"], "assist", 42)
def test_unsafefast_stradd(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/stradd.vale"], "unsafe-fast", 42)
def test_resilientv4_stradd(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/stradd.vale"], "resilient-v4", 42)
def test_resilientv3_stradd(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/stradd.vale"], "resilient-v3", 42)
def test_naiverc_stradd(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/stradd.vale"], "naive-rc", 42)
def test_assist_strneq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strneq.vale"], "assist", 42)
def test_unsafefast_strneq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strneq.vale"], "unsafe-fast", 42)
def test_resilientv4_strneq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strneq.vale"], "resilient-v4", 42)
def test_resilientv3_strneq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strneq.vale"], "resilient-v3", 42)
def test_naiverc_strneq(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strneq.vale"], "naive-rc", 42)
def test_assist_lambdamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambdamut.vale"], "assist", 42)
def test_unsafefast_lambdamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambdamut.vale"], "unsafe-fast", 42)
def test_resilientv4_lambdamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambdamut.vale"], "resilient-v4", 42)
def test_resilientv3_lambdamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambdamut.vale"], "resilient-v3", 42)
def test_naiverc_lambdamut(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/lambdas/lambdamut.vale"], "naive-rc", 42)
def test_assist_strprint(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strprint.vale"], "assist", 42)
def test_unsafefast_strprint(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strprint.vale"], "unsafe-fast", 42)
def test_resilientv4_strprint(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strprint.vale"], "resilient-v4", 42)
def test_resilientv3_strprint(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strprint.vale"], "resilient-v3", 42)
def test_naiverc_strprint(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strprint.vale"], "naive-rc", 42)
def test_assist_inttostr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/inttostr.vale"], "assist", 4)
def test_unsafefast_inttostr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/inttostr.vale"], "unsafe-fast", 4)
def test_resilientv4_inttostr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/inttostr.vale"], "resilient-v4", 4)
def test_resilientv3_inttostr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/inttostr.vale"], "resilient-v3", 4)
def test_naiverc_inttostr(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/inttostr.vale"], "naive-rc", 4)
def test_assist_nestedif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/nestedif.vale"], "assist", 42)
def test_unsafefast_nestedif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/nestedif.vale"], "unsafe-fast", 42)
def test_resilientv4_nestedif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/nestedif.vale"], "resilient-v4", 42)
def test_resilientv3_nestedif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/nestedif.vale"], "resilient-v3", 42)
def test_naiverc_nestedif(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/nestedif.vale"], "naive-rc", 42)
def test_assist_unstackifyret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unstackifyret.vale"], "assist", 42)
def test_unsafefast_unstackifyret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unstackifyret.vale"], "unsafe-fast", 42)
def test_resilientv4_unstackifyret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unstackifyret.vale"], "resilient-v4", 42)
def test_resilientv3_unstackifyret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unstackifyret.vale"], "resilient-v3", 42)
def test_naiverc_unstackifyret(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unstackifyret.vale"], "naive-rc", 42)
def test_assist_swaprsamutdestroy(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/swaprsamutdestroy.vale"], "assist", 42)
def test_unsafefast_swaprsamutdestroy(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/swaprsamutdestroy.vale"], "unsafe-fast", 42)
def test_resilientv4_swaprsamutdestroy(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/swaprsamutdestroy.vale"], "resilient-v4", 42)
def test_resilientv3_swaprsamutdestroy(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/swaprsamutdestroy.vale"], "resilient-v3", 42)
def test_naiverc_swaprsamutdestroy(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/arrays/swaprsamutdestroy.vale"], "naive-rc", 42)
def test_assist_downcastConstraintSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintSuccessful.vale"], "assist", 42)
def test_unsafefast_downcastConstraintSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintSuccessful.vale"], "unsafe-fast", 42)
def test_resilientv4_downcastConstraintSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintSuccessful.vale"], "resilient-v4", 42)
def test_resilientv3_downcastConstraintSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintSuccessful.vale"], "resilient-v3", 42)
def test_naiverc_downcastConstraintSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintSuccessful.vale"], "naive-rc", 42)
def test_assist_downcastConstraintFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintFailed.vale"], "assist", 42)
def test_unsafefast_downcastConstraintFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintFailed.vale"], "unsafe-fast", 42)
def test_resilientv4_downcastConstraintFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintFailed.vale"], "resilient-v4", 42)
def test_resilientv3_downcastConstraintFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintFailed.vale"], "resilient-v3", 42)
def test_naiverc_downcastConstraintFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastConstraintFailed.vale"], "naive-rc", 42)
def test_assist_downcastOwningSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningSuccessful.vale"], "assist", 42)
def test_unsafefast_downcastOwningSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningSuccessful.vale"], "unsafe-fast", 42)
def test_resilientv4_downcastOwningSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningSuccessful.vale"], "resilient-v4", 42)
def test_resilientv3_downcastOwningSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningSuccessful.vale"], "resilient-v3", 42)
def test_naiverc_downcastOwningSuccessful(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningSuccessful.vale"], "naive-rc", 42)
def test_assist_downcastOwningFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningFailed.vale"], "assist", 42)
def test_unsafefast_downcastOwningFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningFailed.vale"], "unsafe-fast", 42)
def test_resilientv4_downcastOwningFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningFailed.vale"], "resilient-v4", 42)
def test_resilientv3_downcastOwningFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningFailed.vale"], "resilient-v3", 42)
def test_naiverc_downcastOwningFailed(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/downcast/downcastOwningFailed.vale"], "naive-rc", 42)
def test_assist_unreachablemoot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unreachablemoot.vale"], "assist", 42)
def test_unsafefast_unreachablemoot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unreachablemoot.vale"], "unsafe-fast", 42)
def test_resilientv4_unreachablemoot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unreachablemoot.vale"], "resilient-v4", 42)
def test_resilientv3_unreachablemoot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unreachablemoot.vale"], "resilient-v3", 42)
def test_naiverc_unreachablemoot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/unreachablemoot.vale"], "naive-rc", 42)
def test_assist_panic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panic.vale"], "assist", 1)
def test_unsafefast_panic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panic.vale"], "unsafe-fast", 1)
def test_resilientv4_panic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panic.vale"], "resilient-v4", 1)
def test_resilientv3_panic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panic.vale"], "resilient-v3", 1)
def test_naiverc_panic(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panic.vale"], "naive-rc", 1)
def test_assist_panicnot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panicnot.vale"], "assist", 42)
def test_unsafefast_panicnot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panicnot.vale"], "unsafe-fast", 42)
def test_resilientv4_panicnot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panicnot.vale"], "resilient-v4", 42)
def test_resilientv3_panicnot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panicnot.vale"], "resilient-v3", 42)
def test_naiverc_panicnot(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/panicnot.vale"], "naive-rc", 42)
def test_assist_nestedblocks(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/nestedblocks.vale"], "assist", 42)
def test_unsafefast_nestedblocks(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/nestedblocks.vale"], "unsafe-fast", 42)
def test_resilientv4_nestedblocks(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/nestedblocks.vale"], "resilient-v4", 42)
def test_resilientv3_nestedblocks(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/nestedblocks.vale"], "resilient-v3", 42)
def test_naiverc_nestedblocks(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/nestedblocks.vale"], "naive-rc", 42)
def test_assist_weakDropThenLockStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockStruct.vale"], "assist", 42)
def test_unsafefast_weakDropThenLockStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockStruct.vale"], "unsafe-fast", 42)
def test_resilientv4_weakDropThenLockStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockStruct.vale"], "resilient-v4", 42)
def test_resilientv3_weakDropThenLockStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockStruct.vale"], "resilient-v3", 42)
def test_naiverc_weakDropThenLockStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockStruct.vale"], "naive-rc", 42)
def test_assist_weakLockWhileLiveStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveStruct.vale"], "assist", 7)
def test_unsafefast_weakLockWhileLiveStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveStruct.vale"], "unsafe-fast", 7)
def test_resilientv4_weakLockWhileLiveStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveStruct.vale"], "resilient-v4", 7)
def test_resilientv3_weakLockWhileLiveStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveStruct.vale"], "resilient-v3", 7)
def test_naiverc_weakLockWhileLiveStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveStruct.vale"], "naive-rc", 7)
def test_assist_weakFromLocalCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefStruct.vale"], "assist", 7)
def test_unsafefast_weakFromLocalCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefStruct.vale"], "unsafe-fast", 7)
def test_resilientv4_weakFromLocalCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefStruct.vale"], "resilient-v4", 7)
def test_resilientv3_weakFromLocalCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefStruct.vale"], "resilient-v3", 7)
def test_naiverc_weakFromLocalCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefStruct.vale"], "naive-rc", 7)
def test_assist_weakFromCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefStruct.vale"], "assist", 7)
def test_unsafefast_weakFromCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefStruct.vale"], "unsafe-fast", 7)
def test_resilientv4_weakFromCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefStruct.vale"], "resilient-v4", 7)
def test_resilientv3_weakFromCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefStruct.vale"], "resilient-v3", 7)
def test_naiverc_weakFromCRefStruct(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefStruct.vale"], "naive-rc", 7)
def test_assist_loadFromWeakable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/loadFromWeakable.vale"], "assist", 7)
def test_unsafefast_loadFromWeakable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/loadFromWeakable.vale"], "unsafe-fast", 7)
def test_resilientv4_loadFromWeakable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/loadFromWeakable.vale"], "resilient-v4", 7)
def test_resilientv3_loadFromWeakable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/loadFromWeakable.vale"], "resilient-v3", 7)
def test_naiverc_loadFromWeakable(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/loadFromWeakable.vale"], "naive-rc", 7)
def test_assist_weakDropThenLockInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockInterface.vale"], "assist", 42)
def test_unsafefast_weakDropThenLockInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockInterface.vale"], "unsafe-fast", 42)
def test_resilientv4_weakDropThenLockInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockInterface.vale"], "resilient-v4", 42)
def test_resilientv3_weakDropThenLockInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockInterface.vale"], "resilient-v3", 42)
def test_naiverc_weakDropThenLockInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/dropThenLockInterface.vale"], "naive-rc", 42)
def test_assist_weakLockWhileLiveInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveInterface.vale"], "assist", 7)
def test_unsafefast_weakLockWhileLiveInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveInterface.vale"], "unsafe-fast", 7)
def test_resilientv4_weakLockWhileLiveInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveInterface.vale"], "resilient-v4", 7)
def test_resilientv3_weakLockWhileLiveInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveInterface.vale"], "resilient-v3", 7)
def test_naiverc_weakLockWhileLiveInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/lockWhileLiveInterface.vale"], "naive-rc", 7)
def test_assist_weakFromLocalCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefInterface.vale"], "assist", 7)
def test_unsafefast_weakFromLocalCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefInterface.vale"], "unsafe-fast", 7)
def test_resilientv4_weakFromLocalCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefInterface.vale"], "resilient-v4", 7)
def test_resilientv3_weakFromLocalCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefInterface.vale"], "resilient-v3", 7)
def test_naiverc_weakFromLocalCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromLocalCRefInterface.vale"], "naive-rc", 7)
def test_assist_weakFromCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefInterface.vale"], "assist", 7)
def test_unsafefast_weakFromCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefInterface.vale"], "unsafe-fast", 7)
def test_resilientv4_weakFromCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefInterface.vale"], "resilient-v4", 7)
def test_resilientv3_weakFromCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefInterface.vale"], "resilient-v3", 7)
def test_naiverc_weakFromCRefInterface(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/weakFromCRefInterface.vale"], "naive-rc", 7)
def test_assist_weakSelfMethodCallWhileLive(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodWhileLive.vale"], "assist", 42)
def test_unsafefast_weakSelfMethodCallWhileLive(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodWhileLive.vale"], "unsafe-fast", 42)
def test_resilientv4_weakSelfMethodCallWhileLive(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodWhileLive.vale"], "resilient-v4", 42)
def test_resilientv3_weakSelfMethodCallWhileLive(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodWhileLive.vale"], "resilient-v3", 42)
def test_naiverc_weakSelfMethodCallWhileLive(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodWhileLive.vale"], "naive-rc", 42)
def test_assist_weakSelfMethodCallAfterDrop(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodAfterDrop.vale"], "assist", 0)
def test_unsafefast_weakSelfMethodCallAfterDrop(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodAfterDrop.vale"], "unsafe-fast", 0)
def test_resilientv4_weakSelfMethodCallAfterDrop(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodAfterDrop.vale"], "resilient-v4", 0)
def test_resilientv3_weakSelfMethodCallAfterDrop(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodAfterDrop.vale"], "resilient-v3", 0)
def test_naiverc_weakSelfMethodCallAfterDrop(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/weaks/callWeakSelfMethodAfterDrop.vale"], "naive-rc", 0)
# def test_assist_tupleretextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleretextern"], "assist", 42)
# def test_unsafefast_tupleretextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleretextern"], "unsafe-fast", 42)
# def test_resilientv4_tupleretextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleretextern"], "resilient-v4", 42)
# def test_resilientv3_tupleretextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleretextern"], "resilient-v3", 42)
# def test_naiverc_tupleretextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleretextern"], "naive-rc", 42)
def test_assist_interfaceimmreturnextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnextern"], "assist", 42)
# def test_unsafefast_interfaceimmreturnextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnextern"], "unsafe-fast", 42)
def test_resilientv4_interfaceimmreturnextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnextern"], "resilient-v4", 42)
def test_resilientv3_interfaceimmreturnextern(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnextern"], "resilient-v3", 42)
# def test_naiverc_interfaceimmreturnextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnextern"], "naive-rc", 42)
def test_assist_interfaceimmreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnexport"], "assist", 42)
# def test_unsafefast_interfaceimmreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnexport"], "unsafe-fast", 42)
def test_resilientv4_interfaceimmreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnexport"], "resilient-v4", 42)
def test_resilientv3_interfaceimmreturnexport(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnexport"], "resilient-v3", 42)
# def test_naiverc_interfaceimmreturnexport(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/interfaceimmreturnexport"], "naive-rc", 42)
def test_assist_strlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strlen.vale"], "assist", 12)
def test_unsafefast_strlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strlen.vale"], "unsafe-fast", 12)
def test_resilientv4_strlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strlen.vale"], "resilient-v4", 12)
def test_resilientv3_strlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strlen.vale"], "resilient-v3", 12)
def test_naiverc_strlen(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/strings/strlen.vale"], "naive-rc", 12)
# no assist test: can't get an invalid access in assist mode; a constraint ref catches it first
def test_unsafefast_invalidaccess(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/invalidaccess.vale"], "unsafe-fast", 14)
def test_resilientv4_invalidaccess(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/invalidaccess.vale"], "resilient-v4", -11)
def test_resilientv3_invalidaccess(self) -> None:
self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/invalidaccess.vale"], "resilient-v3", -11)
# def test_naiverc_invalidaccess(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/invalidaccess.vale"], "naive-rc", 255)
# def test_assist_neverif(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/neverif.vale"], "assist", 42)
# def test_unsafefast_neverif(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/neverif.vale"], "unsafe-fast", 42)
# def test_resilientv4_neverif(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/neverif.vale"], "resilient-v4", 42)
# def test_resilientv3_neverif(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/neverif.vale"], "resilient-v3", 42)
# def test_naiverc_neverif(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/if/neverif.vale"], "naive-rc", 42)
# def test_assist_tupleparamextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleparamextern"], "assist", 42)
# def test_unsafefast_tupleparamextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleparamextern"], "unsafe-fast", 42)
# def test_resilientv4_tupleparamextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleparamextern"], "resilient-v4", 42)
# def test_resilientv3_tupleparamextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleparamextern"], "resilient-v3", 42)
# def test_naiverc_tupleparamextern(self) -> None:
# self.compile_and_execute_and_expect_return_code([PATH_TO_SAMPLES + "programs/externs/tupleparamextern"], "naive-rc", 42)
if __name__ == '__main__':
unittest.main()
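The five near-identical methods per sample above differ only in the region-mode string ("assist", "unsafe-fast", "resilient-v4", "resilient-v3", "naive-rc"). A hypothetical sketch of generating them in a loop; the helper name and `PATH_TO_SAMPLES` mirror the tests above, but the generator itself is an assumption for illustration, not part of the project:

```python
# Hypothetical sketch: attach one test_<region>_<name> method per
# (sample, region) pair instead of hand-writing five methods per sample.
PATH_TO_SAMPLES = "samples/"  # placeholder path, an assumption

REGIONS = ["assist", "unsafe-fast", "resilient-v4", "resilient-v3", "naive-rc"]
SAMPLES = [
    ("programs/strings/strlen.vale", 12),
    ("programs/panic.vale", 1),
]

def _make_test(path, region, expected):
    def test(self):
        self.compile_and_execute_and_expect_return_code(
            [PATH_TO_SAMPLES + path], region, expected)
    return test

def add_sample_tests(cls):
    # Derive the method name from the sample's base name and the region
    # with its dashes stripped, matching the naming scheme used above.
    for path, expected in SAMPLES:
        name = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        for region in REGIONS:
            setattr(cls, "test_%s_%s" % (region.replace("-", ""), name),
                    _make_test(path, region, expected))
    return cls
```

Applied as a class decorator, this would produce the same `test_assist_strlen`, `test_naiverc_panic`, etc. names that `unittest` discovers.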
# tests/draw/svg/test_gradients.py, from WeasyPrint (BSD-3-Clause)
] | null | null | null | """
weasyprint.tests.test_draw.svg.test_gradients
---------------------------------------------
Test how SVG simple gradients are drawn.
"""
import pytest
from ...testing_utils import assert_no_logs
from .. import assert_pixels
@assert_no_logs
def test_linear_gradient():
assert_pixels('linear_gradient', 10, 10, '''
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="1"
gradientUnits="objectBoundingBox">
<stop stop-color="blue" offset="50%"></stop>
<stop stop-color="red" offset="50%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
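Two stops sharing the same offset produce a hard color edge rather than a blend, which is why the expected pixels split cleanly between rows 4 and 5. A simplified standalone model of that sampling rule (an illustration, not WeasyPrint's rendering code):

```python
def sample_stops(stops, t):
    # stops: (offset, color) pairs sorted by offset.  At a duplicated
    # offset the later stop wins, which yields a hard edge at t == offset;
    # before the first offset the first stop's color applies.
    color = stops[0][1]
    for offset, stop_color in stops:
        if t >= offset:
            color = stop_color
    return color

stops = [(0.5, "B"), (0.5, "R")]
# Sample at the pixel centers of the 10px-high box used above.
column = [sample_stops(stops, (y + 0.5) / 10) for y in range(10)]
```

Rows 0-4 sample at t < 0.5 and stay blue; rows 5-9 sample at t > 0.5 and take the red stop.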
@assert_no_logs
def test_linear_gradient_userspace():
assert_pixels('linear_gradient_userspace', 10, 10, '''
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
BBBBBBBBBB
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
RRRRRRRRRR
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="10"
gradientUnits="userSpaceOnUse">
<stop stop-color="blue" offset="50%"></stop>
<stop stop-color="red" offset="50%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_linear_gradient_multicolor():
assert_pixels('linear_gradient_multicolor', 10, 8, '''
BBBBBBBBBB
BBBBBBBBBB
RRRRRRRRRR
RRRRRRRRRR
GGGGGGGGGG
GGGGGGGGGG
vvvvvvvvvv
vvvvvvvvvv
''', '''
<style>
@page { size: 10px 8px }
svg { display: block }
</style>
<svg width="10px" height="8px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="1"
gradientUnits="objectBoundingBox">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
<stop stop-color="red" offset="50%"></stop>
<stop stop-color="lime" offset="50%"></stop>
<stop stop-color="lime" offset="75%"></stop>
<stop stop-color="rgb(128,0,128)" offset="75%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="8" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_linear_gradient_multicolor_userspace():
assert_pixels('linear_gradient_multicolor_userspace', 10, 8, '''
BBBBBBBBBB
BBBBBBBBBB
RRRRRRRRRR
RRRRRRRRRR
GGGGGGGGGG
GGGGGGGGGG
vvvvvvvvvv
vvvvvvvvvv
''', '''
<style>
@page { size: 10px 8px }
svg { display: block }
</style>
<svg width="10px" height="8px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="8"
gradientUnits="userSpaceOnUse">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
<stop stop-color="red" offset="50%"></stop>
<stop stop-color="lime" offset="50%"></stop>
<stop stop-color="lime" offset="75%"></stop>
<stop stop-color="rgb(128,0,128)" offset="75%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="8" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_linear_gradient_transform():
assert_pixels('linear_gradient_transform', 10, 8, '''
BBBBBBBBBB
RRRRRRRRRR
GGGGGGGGGG
vvvvvvvvvv
vvvvvvvvvv
vvvvvvvvvv
vvvvvvvvvv
vvvvvvvvvv
''', '''
<style>
@page { size: 10px 8px}
svg { display: block }
</style>
<svg width="10px" height="8px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="1"
gradientUnits="objectBoundingBox" gradientTransform="scale(0.5)">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
<stop stop-color="red" offset="50%"></stop>
<stop stop-color="lime" offset="50%"></stop>
<stop stop-color="lime" offset="75%"></stop>
<stop stop-color="rgb(128,0,128)" offset="75%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="8" fill="url(#grad)" />
</svg>
''')
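`gradientTransform="scale(0.5)"` halves the span the stops cover, so the whole four-band pattern lands in the top 4px and the final color pads the remaining rows (the default `spreadMethod="pad"` clamps positions past the last stop). A simplified model of that effect (an illustrative assumption, not the library's code):

```python
def sample_stops(stops, t):
    # Default "pad" spread: clamp t into [0, 1], then take the color of
    # the last stop whose offset is <= t (later stop wins on ties).
    t = min(max(t, 0.0), 1.0)
    color = stops[0][1]
    for offset, stop_color in stops:
        if t >= offset:
            color = stop_color
    return color

stops = [(0.25, "B"), (0.25, "R"), (0.5, "R"), (0.5, "G"),
         (0.75, "G"), (0.75, "v")]
scale = 0.5  # gradientTransform="scale(0.5)" halves the gradient span
column = [sample_stops(stops, (y + 0.5) / (8 * scale)) for y in range(8)]
```

Rows 0-3 walk through all four bands in 4px; rows 4-7 clamp to t = 1 and repeat the purple.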
@pytest.mark.xfail
@assert_no_logs
def test_linear_gradient_repeat():
assert_pixels('linear_gradient_repeat', 10, 8, '''
BBBBBBBBBB
RRRRRRRRRR
GGGGGGGGGG
vvvvvvvvvv
BBBBBBBBBB
RRRRRRRRRR
GGGGGGGGGG
vvvvvvvvvv
''', '''
<style>
@page { size: 10px 8px }
svg { display: block }
</style>
<svg width="10px" height="8px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="1"
gradientUnits="objectBoundingBox" spreadMethod="repeat">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
<stop stop-color="red" offset="50%"></stop>
<stop stop-color="lime" offset="50%"></stop>
<stop stop-color="lime" offset="75%"></stop>
<stop stop-color="rgb(128,0,128)" offset="75%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="8" fill="url(#grad)" />
</svg>
''')
@pytest.mark.xfail
@assert_no_logs
def test_linear_gradient_reflect():
assert_pixels('linear_gradient_reflect', 10, 8, '''
BBBBBBBBBB
RRRRRRRRRR
GGGGGGGGGG
vvvvvvvvvv
vvvvvvvvvv
GGGGGGGGGG
RRRRRRRRRR
BBBBBBBBBB
''', '''
<style>
@page { size: 10px 8px }
svg { display: block }
</style>
<svg width="10px" height="8px" xmlns="http://www.w3.org/2000/svg">
<defs>
<linearGradient id="grad" x1="0" y1="0" x2="0" y2="0.5"
gradientUnits="objectBoundingBox" spreadMethod="reflect">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
<stop stop-color="red" offset="50%"></stop>
<stop stop-color="lime" offset="50%"></stop>
<stop stop-color="lime" offset="75%"></stop>
<stop stop-color="rgb(128,0,128)" offset="75%"></stop>
</linearGradient>
</defs>
<rect x="0" y="0" width="10" height="8" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_radial_gradient():
assert_pixels('radial_gradient', 10, 10, '''
rrrrrrrrrr
rrrrrrrrrr
rrrrBBrrrr
rrrBBBBrrr
rrBBBBBBrr
rrBBBBBBrr
rrrBBBBrrr
rrrrBBrrrr
rrrrrrrrrr
rrrrrrrrrr
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="0.5" cy="0.5" r="0.5"
fx="0.5" fy="0.5" fr="0.2"
gradientUnits="objectBoundingBox">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_radial_gradient_userspace():
assert_pixels('radial_gradient_userspace', 10, 10, '''
rrrrrrrrrr
rrrrrrrrrr
rrrrBBrrrr
rrrBBBBrrr
rrBBBBBBrr
rrBBBBBBrr
rrrBBBBrrr
rrrrBBrrrr
rrrrrrrrrr
rrrrrrrrrr
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="5" cy="5" r="5" fx="5" fy="5" fr="2"
gradientUnits="userSpaceOnUse">
<stop stop-color="blue" offset="25%"></stop>
<stop stop-color="red" offset="25%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_radial_gradient_multicolor():
assert_pixels('radial_gradient_multicolor', 10, 10, '''
rrrrrrrrrr
rrrGGGGrrr
rrGGBBGGrr
rGGBBBBGGr
rGBBBBBBGr
rGBBBBBBGr
rGGBBBBGGr
rrGGBBGGrr
rrrGGGGrrr
rrrrrrrrrr
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="0.5" cy="0.5" r="0.5"
fx="0.5" fy="0.5" fr="0.2"
gradientUnits="objectBoundingBox">
<stop stop-color="blue" offset="33%"></stop>
<stop stop-color="lime" offset="33%"></stop>
<stop stop-color="lime" offset="66%"></stop>
<stop stop-color="red" offset="66%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@assert_no_logs
def test_radial_gradient_multicolor_userspace():
assert_pixels('radial_gradient_multicolor_userspace', 10, 10, '''
rrrrrrrrrr
rrrGGGGrrr
rrGGBBGGrr
rGGBBBBGGr
rGBBBBBBGr
rGBBBBBBGr
rGGBBBBGGr
rrGGBBGGrr
rrrGGGGrrr
rrrrrrrrrr
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="5" cy="5" r="5"
fx="5" fy="5" fr="2"
gradientUnits="userSpaceOnUse">
<stop stop-color="blue" offset="33%"></stop>
<stop stop-color="lime" offset="33%"></stop>
<stop stop-color="lime" offset="66%"></stop>
<stop stop-color="red" offset="66%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@pytest.mark.xfail
@assert_no_logs
def test_radial_gradient_repeat():
assert_pixels('radial_gradient_repeat', 10, 10, '''
GBrrrrrrBG
BrrGGGGrrB
rrGGBBGGrr
rGGBBBBGGr
rGBBBBBBGr
rGBBBBBBGr
rGGBBBBGGr
rrGGBBGGrr
BrrGGGGrrB
GBrrrrrrBG
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="0.5" cy="0.5" r="0.5"
fx="0.5" fy="0.5" fr="0.2"
gradientUnits="objectBoundingBox" spreadMethod="repeat">
<stop stop-color="blue" offset="33%"></stop>
<stop stop-color="lime" offset="33%"></stop>
<stop stop-color="lime" offset="66%"></stop>
<stop stop-color="red" offset="66%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
@pytest.mark.xfail
@assert_no_logs
def test_radial_gradient_reflect():
assert_pixels('radial_gradient_reflect', 10, 10, '''
BGrrrrrrGB
GrrGGGGrrG
rrGGBBGGrr
rGGBBBBGGr
rGBBBBBBGr
rGBBBBBBGr
rGGBBBBGGr
rrGGBBGGrr
GrrGGGGrrG
BGrrrrrrGB
''', '''
<style>
@page { size: 10px }
svg { display: block }
</style>
<svg width="10px" height="10px" xmlns="http://www.w3.org/2000/svg">
<defs>
<radialGradient id="grad" cx="0.5" cy="0.5" r="0.5"
fx="0.5" fy="0.5" fr="0.2"
gradientUnits="objectBoundingBox" spreadMethod="reflect">
<stop stop-color="blue" offset="33%"></stop>
<stop stop-color="lime" offset="33%"></stop>
<stop stop-color="lime" offset="66%"></stop>
<stop stop-color="red" offset="66%"></stop>
</radialGradient>
</defs>
<rect x="0" y="0" width="10" height="10" fill="url(#grad)" />
</svg>
''')
# utils/data.py, from Shun-Gan/Adaptive-Driver-Attention-ADA-model (MIT license)
from pathlib import Path
import os
import random
import json
import itertools
import copy
import torch
from torch.utils.data import Dataset, DataLoader, BatchSampler, RandomSampler, \
SequentialSampler
from torchvision import transforms
import numpy as np
import cv2
import PIL
import scipy.io
import glob
from . import utils
default_data_dir = Path(__file__).resolve().parent.parent / "data"
# Set default paths
if "DReye_DATA_DIR" not in os.environ:
os.environ["DReye_DATA_DIR"] = str(default_data_dir / "New_DReye")
if "DADA2000_DATA_DIR" not in os.environ:
os.environ["DADA2000_DATA_DIR"] = str(default_data_dir / "DADA")
if "DT16_DATA_DIR" not in os.environ:
os.environ["DT16_DATA_DIR"] = str(default_data_dir / "DT16")
if "BDDA_DATA_DIR" not in os.environ:
os.environ["BDDA_DATA_DIR"] = str(default_data_dir / "BDDA")
config_path = Path(__file__).resolve().parent / "cache"
# os.environ["DADA2000_DATA_DIR"] = "/media/acl/7A4A85A74A85612D/01_Driver_Gaze/TASED_Net_DADA/data"
def get_dataloader(src='DHF1K'):
if src in ('MIT1003',):
return ImgSizeDataLoader
return DataLoader
class ImgSizeBatchSampler:
def __init__(self, dataset, batch_size=1, shuffle=False, drop_last=False):
assert(isinstance(dataset, MIT1003Dataset))
self.batch_size = batch_size
self.shuffle = shuffle
self.drop_last = drop_last
out_size_array = [
dataset.size_dict[img_idx]['out_size']
for img_idx in dataset.samples]
self.out_size_set = sorted(list(set(out_size_array)))
self.sample_idx_dict = {
out_size: [] for out_size in self.out_size_set}
for sample_idx, img_idx in enumerate(dataset.samples):
self.sample_idx_dict[dataset.size_dict[img_idx]['out_size']].append(
sample_idx)
self.len = 0
self.n_batches_dict = {}
for out_size, sample_idx_array in self.sample_idx_dict.items():
this_n_batches = len(sample_idx_array) // self.batch_size
self.len += this_n_batches
self.n_batches_dict[out_size] = this_n_batches
def __iter__(self):
batch_array = list(itertools.chain.from_iterable(
[out_size for _ in range(n_batches)]
for out_size, n_batches in self.n_batches_dict.items()))
if not self.shuffle:
random.seed(27)
random.shuffle(batch_array)
this_sample_idx_dict = copy.deepcopy(self.sample_idx_dict)
for sample_idx_array in this_sample_idx_dict.values():
random.shuffle(sample_idx_array)
for out_size in batch_array:
this_indices = this_sample_idx_dict[out_size][:self.batch_size]
del this_sample_idx_dict[out_size][:self.batch_size]
yield this_indices
def __len__(self):
return self.len
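The sampler above buckets sample indices by their target output size so that every batch is size-homogeneous, then drops trailing partial batches. A minimal standalone sketch of that bucketing idea (plain lists stand in for `MIT1003Dataset`, which is assumed to be defined elsewhere in this file):

```python
import random

def size_bucketed_batches(sizes, batch_size, seed=27):
    # Group sample indices by their (h, w) output size.
    buckets = {}
    for idx, size in enumerate(sizes):
        buckets.setdefault(size, []).append(idx)
    random.seed(seed)
    batches = []
    for indices in buckets.values():
        random.shuffle(indices)
        # Drop the trailing partial batch, as the sampler does.
        for i in range(0, len(indices) - len(indices) % batch_size, batch_size):
            batches.append(indices[i:i + batch_size])
    return batches

sizes = [(224, 384), (224, 384), (256, 384), (224, 384), (256, 384), (256, 384)]
batches = size_bucketed_batches(sizes, batch_size=2)
# Every batch mixes only indices that share one output size.
assert all(len({sizes[i] for i in batch}) == 1 for batch in batches)
```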
class ImgSizeDataLoader(DataLoader):
def __init__(self, dataset, batch_size=1, shuffle=False, drop_last=False,
**kwargs):
if batch_size == 1:
if shuffle:
sampler = RandomSampler(dataset)
else:
sampler = SequentialSampler(dataset)
batch_sampler = BatchSampler(sampler, batch_size, drop_last)
else:
batch_sampler = ImgSizeBatchSampler(
dataset, batch_size=batch_size, shuffle=shuffle,
drop_last=drop_last)
super().__init__(dataset, batch_sampler=batch_sampler, **kwargs)
def get_optimal_out_size(img_size):
ar = img_size[0] / img_size[1]
min_prod = 100
max_prod = 120
ar_array = []
size_array = []
for n1 in range(7, 14):
for n2 in range(7, 14):
if min_prod <= n1 * n2 <= max_prod:
this_ar = n1 / n2
this_ar_ratio = min((ar, this_ar)) / max((ar, this_ar))
ar_array.append(this_ar_ratio)
size_array.append((n1, n2))
max_ar_ratio_idx = np.argmax(np.array(ar_array)).item()
bn_size = size_array[max_ar_ratio_idx]
out_size = tuple(r * 32 for r in bn_size)
return out_size
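get_optimal_out_size searches 7..13 x 7..13 grids of 32-px cells whose cell-count product lies in [100, 120] for the grid whose aspect ratio best matches the input. A self-contained restatement with a worked call, for illustration only (the 360x640 input is an assumed example matching the DHF1K frame size referenced above):

```python
def optimal_out_size(img_size):
    ar = img_size[0] / img_size[1]
    candidates = []
    for n1 in range(7, 14):
        for n2 in range(7, 14):
            if 100 <= n1 * n2 <= 120:
                this_ar = n1 / n2
                # Ratio of the smaller to the larger aspect ratio: 1.0 is a perfect match.
                candidates.append((min(ar, this_ar) / max(ar, this_ar), (n1, n2)))
    _, (n1, n2) = max(candidates)
    return (n1 * 32, n2 * 32)

# A 360x640 frame (aspect ratio 0.5625) is best matched by an 8x13 grid.
assert optimal_out_size((360, 640)) == (256, 416)
```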
class FolderVideoDataset(Dataset):
def __init__(self, images_path, frame_modulo=None, source=None):
self.images_path = images_path
self.frame_modulo = frame_modulo or 5
self.preproc_cfg = {
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
}
frame_files = sorted(list(images_path.glob("*")))
frame_files = [file for file in frame_files
if file.suffix in ('.png', '.jpg', '.jpeg')]
self.frame_files = frame_files
self.vid_nr_array = [0]
self.n_images_dict = {0: len(frame_files)}
img = cv2.imread(str(frame_files[0]))
img_size = tuple(img.shape[:2])
self.target_size_dict = {0: img_size}
if source == 'DHF1K' and img_size == (360, 640):
self.out_size = (224, 384)
elif source == 'Hollywood':
self.out_size = (224, 416)
elif source == 'UCFSports':
self.out_size = (256, 384)
else:
self.out_size = get_optimal_out_size(img_size)
def load_frame(self, f_nr):
frame_file = self.frame_files[f_nr - 1]
frame = cv2.imread(str(frame_file))
if frame is None:
raise FileNotFoundError(frame_file)
frame = np.ascontiguousarray(frame[:, :, ::-1])
return frame
def preprocess_sequence(self, frame_seq):
transformations = []
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
if 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
processing = transforms.Compose(transformations)
tensor = [processing(img) for img in frame_seq]
tensor = torch.stack(tensor)
return tensor
def get_data(self, vid_nr, start):
n_images = self.n_images_dict[vid_nr]
frame_nrs = list(range(start, n_images + 1, self.frame_modulo))
frame_seq = [self.load_frame(f_nr) for f_nr in frame_nrs]
frame_seq = self.preprocess_sequence(frame_seq)
target_size = self.target_size_dict[vid_nr]
return frame_nrs, frame_seq, target_size
def __len__(self):
# This dataset wraps a single video folder, so vid_nr_array has one entry.
return len(self.vid_nr_array)
def __getitem__(self, item):
# Frames are 1-indexed throughout, so sequences start at frame 1.
return self.get_data(item, 1)
class FolderImageDataset(Dataset):
def __init__(self, images_path):
self.images_path = images_path
self.frame_modulo = 1
self.preproc_cfg = {
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
}
image_files = sorted(list(images_path.glob("*")))
image_files = [file for file in image_files
if file.suffix in ('.png', '.jpg', '.jpeg')]
self.image_files = image_files
self.n_images_dict = {
img_idx: 1 for img_idx in range(len(self.image_files))}
self.target_size_dict = {}
self.out_size_dict = {}
for img_idx, file in enumerate(image_files):
img = cv2.imread(str(file))
img_size = tuple(img.shape[:2])
self.target_size_dict[img_idx] = img_size
self.out_size_dict[img_idx] = get_optimal_out_size(img_size)
def load_image(self, img_idx):
image_file = self.image_files[img_idx]
image = cv2.imread(str(image_file))
if image is None:
raise FileNotFoundError(image_file)
image = np.ascontiguousarray(image[:, :, ::-1])
return image
def preprocess(self, img, out_size):
transformations = [
transforms.ToPILImage(),
transforms.Resize(
out_size, interpolation=PIL.Image.LANCZOS),
transforms.ToTensor(),
]
if 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
processing = transforms.Compose(transformations)
tensor = processing(img)
return tensor
def get_data(self, img_idx):
file = self.image_files[img_idx]
img = cv2.imread(str(file))
assert (img is not None)
img = np.ascontiguousarray(img[:, :, ::-1])
out_size = self.out_size_dict[img_idx]
img = self.preprocess(img, out_size)
return [1], img, self.target_size_dict[img_idx]
def __len__(self):
return len(self.image_files)
def __getitem__(self, item):
return self.get_data(item)
###
class DReyeDataset(Dataset, utils.KwConfigClass):
img_channels = 1
n_train_val_videos = 405 # 570
test_vid_nrs = (406, 780) #1110
frame_rate = 24 # note: the videos are 25 fps; frame_modulo is set to 4 accordingly
source = 'DReye'
dynamic = True
def __init__(self,
seq_len=12,
frame_modulo=4,
max_seq_len=1e6,
preproc_cfg=None,
out_size=(224, 384), phase='train', target_size=(360, 640),
debug=False, val_size=27, n_x_val=3, x_val_step=2,
x_val_seed=0, seq_per_vid=1, subset=None, verbose=1,
n_images_file='DReye_n_images.dat', seq_per_vid_val=2,
sal_offset=None):
self.phase = phase
self.train = phase == 'train'
if not self.train:
preproc_cfg = {}
elif preproc_cfg is None:
preproc_cfg = {}
preproc_cfg.update({
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
})
self.preproc_cfg = preproc_cfg
self.out_size = out_size
self.debug = debug
self.val_size = val_size
self.n_x_val = n_x_val
self.x_val_step = x_val_step
self.x_val_seed = x_val_seed
self.seq_len = seq_len
self.seq_per_vid = seq_per_vid
self.seq_per_vid_val = seq_per_vid_val
self.frame_modulo = frame_modulo
self.clip_len = seq_len * frame_modulo
self.subset = subset
self.verbose = verbose
self.n_images_file = n_images_file
self.target_size = target_size
self.sal_offset = sal_offset
self.max_seq_len = max_seq_len
self._dir = None
self._n_images_dict = None
self.vid_nr_array = None
# Evaluation
if phase in ('eval', 'test'):
self.seq_len = int(1e6)
if self.phase in ('test',):
self.vid_nr_array = list(range(
self.test_vid_nrs[0], self.test_vid_nrs[1] + 1))
self.samples, self.target_size_dict = self.prepare_samples()
return
# Cross-validation split
n_videos = self.n_train_val_videos
assert(self.val_size <= n_videos // self.n_x_val)
assert(self.x_val_step < self.n_x_val)
vid_nr_array = np.arange(1, n_videos + 1)
if self.x_val_seed > 0:
np.random.seed(self.x_val_seed)
np.random.shuffle(vid_nr_array)
val_start = (len(vid_nr_array) - self.val_size) //\
(self.n_x_val - 1) * self.x_val_step
vid_nr_array = vid_nr_array.tolist()
if not self.train:
self.vid_nr_array =\
vid_nr_array[val_start:val_start + self.val_size]
else:
del vid_nr_array[val_start:val_start + self.val_size]
self.vid_nr_array = vid_nr_array
if self.subset is not None:
self.vid_nr_array =\
self.vid_nr_array[:int(len(self.vid_nr_array) * self.subset)]
self.samples, self.target_size_dict = self.prepare_samples()
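The cross-validation split above carves one contiguous window of val_size videos out of the (optionally shuffled) video list, with the window position chosen by x_val_step. A small standalone sketch of that arithmetic (the numbers are illustrative, not the dataset defaults):

```python
def xval_split(n_videos, val_size, n_x_val, x_val_step):
    vid_nrs = list(range(1, n_videos + 1))
    # Same window-placement formula as in the dataset __init__ above.
    val_start = (len(vid_nrs) - val_size) // (n_x_val - 1) * x_val_step
    val = vid_nrs[val_start:val_start + val_size]
    train = vid_nrs[:val_start] + vid_nrs[val_start + val_size:]
    return train, val

train, val = xval_split(n_videos=12, val_size=4, n_x_val=3, x_val_step=1)
# The two folds partition the videos with no overlap.
assert sorted(train + val) == list(range(1, 13))
assert not set(train) & set(val)
```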
@property
def n_images_dict(self):
if self._n_images_dict is None:
with open(config_path.parent / self.n_images_file, 'r') as f:
self._n_images_dict = {
idx + 1: int(line) for idx, line in enumerate(f)
if idx + 1 in self.vid_nr_array}
return self._n_images_dict
@property
def dir(self):
if self._dir is None:
self._dir = Path(os.environ["DReye_DATA_DIR"])
return self._dir
@property
def n_samples(self):
return len(self.vid_nr_array)
def __len__(self):
return len(self.samples)
def prepare_samples(self):
samples = []
too_short = 0
too_long = 0
for vid_nr, n_images in self.n_images_dict.items():
if self.phase in ('eval', 'test'):
samples += [
(vid_nr, offset + 1) for offset in range(self.frame_modulo)]
continue
# Skip videos whose frame count is too small or too large
if n_images < self.clip_len:
too_short += 1
continue
if n_images // self.frame_modulo > self.max_seq_len:
too_long += 1
continue
#
if self.phase == 'train':
samples += [(vid_nr, None)] * self.seq_per_vid
continue
elif self.phase == 'valid':
x = n_images // (self.seq_per_vid_val * 2) - self.clip_len // 2
start = max(1, x)
end = min(n_images - self.clip_len, n_images - x)
samples += [
(vid_nr, int(start)) for start in
np.linspace(start, end, self.seq_per_vid_val)]
continue
# Print basic statistics about dataset loading
if self.phase not in ('eval', 'test') and self.n_images_dict:
n_loaded = len(self.n_images_dict) - too_short - too_long
print(f"{n_loaded} videos loaded "
f"({n_loaded / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_short} videos are too short "
f"({too_short / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_long} videos are too long "
f"({too_long / len(self.n_images_dict) * 100:.1f}%)")
target_size_dict = {
vid_nr: self.target_size for vid_nr in self.n_images_dict.keys()}
return samples, target_size_dict
def get_frame_nrs(self, vid_nr, start):
n_images = self.n_images_dict[vid_nr]
if self.phase in ('eval', 'test'):
return list(range(start, n_images + 1, self.frame_modulo))
return list(range(start, start + self.clip_len, self.frame_modulo))
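In eval/test mode, prepare_samples emits one sample per start offset 1..frame_modulo, and get_frame_nrs strides through the whole video from that offset, so the interleaved sequences together cover every frame exactly once. A standalone check of that coverage (illustrative numbers):

```python
def frame_nrs(start, n_images, frame_modulo):
    # Strided frame numbers used in eval/test mode (frames are 1-indexed).
    return list(range(start, n_images + 1, frame_modulo))

n_images, frame_modulo = 23, 4
covered = []
for offset in range(frame_modulo):
    covered += frame_nrs(offset + 1, n_images, frame_modulo)
# The frame_modulo interleaved sequences partition all frames.
assert sorted(covered) == list(range(1, n_images + 1))
```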
def get_data_file(self, vid_nr, f_nr, dkey):
if dkey == 'frame':
folder = 'images'
elif dkey == 'sal':
folder = 'new_maps'
elif dkey == 'fix':
folder = 'fixation'
else:
raise ValueError(f'Unknown data key {dkey}')
###
img_path = str(self.dir / f'{vid_nr:04d}' / folder/ f'{f_nr:04d}.png')
return img_path
def load_data(self, vid_nr, f_nr, dkey):
read_flag = None if dkey == 'frame' else cv2.IMREAD_GRAYSCALE
data_file = self.get_data_file(vid_nr, f_nr, dkey)
if read_flag is not None:
data = cv2.imread(str(data_file), read_flag)
else:
data = cv2.imread(str(data_file))
if data is None:
raise FileNotFoundError(data_file)
if dkey == 'frame':
data = np.ascontiguousarray(data[:, :, ::-1])
if dkey == 'sal' and self.train and self.sal_offset is not None:
data += self.sal_offset
data[0, 0] = 0
return data
def preprocess_sequence(self, frame_seq, dkey, vid_nr):
transformations = []
if dkey == 'frame':
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
if dkey == 'frame' and 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
elif dkey == 'sal':
transformations.append(transforms.Lambda(utils.normalize_tensor))
# elif dkey == 'fix':
# transformations.append(
# transforms.Lambda(lambda fix: torch.gt(fix, 0.5)))
##!
processing = transforms.Compose(transformations)
tensor = [processing(img) for img in frame_seq]
tensor = torch.stack(tensor)
return tensor
def get_seq(self, vid_nr, frame_nrs, dkey):
data_seq = [self.load_data(vid_nr, f_nr, dkey) for f_nr in frame_nrs]
return self.preprocess_sequence(data_seq, dkey, vid_nr)
def get_data(self, vid_nr, start):
if start is None:
max_start = self.n_images_dict[vid_nr] - self.clip_len + 1
if max_start == 1:
start = max_start
else:
start = np.random.randint(1, max_start)
frame_nrs = self.get_frame_nrs(vid_nr, start)
frame_seq = self.get_seq(vid_nr, frame_nrs, 'frame')
target_size = self.target_size_dict[vid_nr]
# if self.phase == 'test' and self.source in ('DReye',):
# return frame_nrs, frame_seq, target_size
sal_seq = self.get_seq(vid_nr, frame_nrs, 'sal')
fix_seq = torch.full(self.target_size, 0, dtype=torch.bool)
# Fixation maps would be needed for the NSS, AUC-J and AUC-S metrics:
# fix_seq = self.get_seq(vid_nr, frame_nrs, 'fix')
# Here a zero placeholder is returned and sal_seq is used in place of fix_seq.
return frame_nrs, frame_seq, sal_seq, fix_seq, target_size
def __getitem__(self, item):
vid_nr, start = self.samples[item]
data = self.get_data(vid_nr, start)
return data
class DADA2000Dataset(Dataset, utils.KwConfigClass):
img_channels = 1
n_train_val_videos = 797
test_vid_nrs = (798, 1013)
frame_rate = 30
source = 'DADA2000'
dynamic = True
def __init__(self,
seq_len=12,
frame_modulo=5,
max_seq_len=1e6,
preproc_cfg=None,
out_size=(224, 538), phase='train', target_size=(224, 538),
debug=False, val_size=100, n_x_val=3, x_val_step=2,
x_val_seed=0, seq_per_vid=1, subset=None, verbose=1,
n_images_file='DADA_n_images.dat', seq_per_vid_val=2,
sal_offset=None):
self.phase = phase
self.train = phase == 'train'
if not self.train:
preproc_cfg = {}
elif preproc_cfg is None:
preproc_cfg = {}
preproc_cfg.update({
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
})
self.preproc_cfg = preproc_cfg
self.out_size = out_size
self.debug = debug
self.val_size = val_size
self.n_x_val = n_x_val
self.x_val_step = x_val_step
self.x_val_seed = x_val_seed
self.seq_len = seq_len
self.seq_per_vid = seq_per_vid
self.seq_per_vid_val = seq_per_vid_val
self.frame_modulo = frame_modulo
self.clip_len = seq_len * frame_modulo
self.subset = subset
self.verbose = verbose
self.n_images_file = n_images_file
self.target_size = target_size
self.sal_offset = sal_offset
self.max_seq_len = max_seq_len
self._dir = None
self._n_images_dict = None
self.vid_nr_array = None
# Evaluation
if phase in ('eval', 'test'):
self.seq_len = int(1e6)
if self.phase in ('test',):
self.vid_nr_array = list(range(
self.test_vid_nrs[0], self.test_vid_nrs[1] + 1))
self.samples, self.target_size_dict = self.prepare_samples()
return
# Cross-validation split
n_videos = self.n_train_val_videos
assert(self.val_size <= n_videos // self.n_x_val)
assert(self.x_val_step < self.n_x_val)
vid_nr_array = np.arange(1, n_videos + 1)
if self.x_val_seed > 0:
np.random.seed(self.x_val_seed)
np.random.shuffle(vid_nr_array)
val_start = (len(vid_nr_array) - self.val_size) //\
(self.n_x_val - 1) * self.x_val_step
vid_nr_array = vid_nr_array.tolist()
if not self.train:
self.vid_nr_array =\
vid_nr_array[val_start:val_start + self.val_size]
else:
del vid_nr_array[val_start:val_start + self.val_size]
self.vid_nr_array = vid_nr_array
if self.subset is not None:
self.vid_nr_array =\
self.vid_nr_array[:int(len(self.vid_nr_array) * self.subset)]
self.samples, self.target_size_dict = self.prepare_samples()
@property
def n_images_dict(self):
if self._n_images_dict is None:
with open(config_path.parent / self.n_images_file, 'r') as f:
self._n_images_dict = {
idx + 1: int(line) for idx, line in enumerate(f)
if idx + 1 in self.vid_nr_array}
return self._n_images_dict
@property
def dir(self):
if self._dir is None:
self._dir = Path(os.environ["DADA2000_DATA_DIR"])
return self._dir
@property
def n_samples(self):
return len(self.vid_nr_array)
def __len__(self):
return len(self.samples)
def prepare_samples(self):
samples = []
too_short = 0
too_long = 0
for vid_nr, n_images in self.n_images_dict.items():
if self.phase in ('eval', 'test'):
samples += [
(vid_nr, offset + 1) for offset in range(self.frame_modulo)]
continue
# Skip videos whose frame count is too small or too large
if n_images < self.clip_len:
too_short += 1
continue
if n_images // self.frame_modulo > self.max_seq_len:
too_long += 1
continue
#
if self.phase == 'train':
samples += [(vid_nr, None)] * self.seq_per_vid
continue
elif self.phase == 'valid':
x = n_images // (self.seq_per_vid_val * 2) - self.clip_len // 2
start = max(1, x)
end = min(n_images - self.clip_len, n_images - x)
samples += [
(vid_nr, int(start)) for start in
np.linspace(start, end, self.seq_per_vid_val)]
continue
# Print basic statistics about dataset loading
if self.phase not in ('eval', 'test') and self.n_images_dict:
n_loaded = len(self.n_images_dict) - too_short - too_long
print(f"{n_loaded} videos loaded "
f"({n_loaded / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_short} videos are too short "
f"({too_short / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_long} videos are too long "
f"({too_long / len(self.n_images_dict) * 100:.1f}%)")
target_size_dict = {
vid_nr: self.target_size for vid_nr in self.n_images_dict.keys()}
return samples, target_size_dict
def get_frame_nrs(self, vid_nr, start):
n_images = self.n_images_dict[vid_nr]
if self.phase in ('eval', 'test'):
return list(range(start, n_images + 1, self.frame_modulo))
return list(range(start, start + self.clip_len, self.frame_modulo))
def get_data_file(self, vid_nr, f_nr, dkey):
if dkey == 'frame':
folder = 'images'
elif dkey == 'sal':
folder = 'maps'
elif dkey == 'fix':
folder = 'fixation'
else:
raise ValueError(f'Unknown data key {dkey}')
###
img_path = str(self.dir / f'{vid_nr:04d}' / folder/ f'{f_nr:04d}.png')
return img_path
def load_data(self, vid_nr, f_nr, dkey):
read_flag = None if dkey == 'frame' else cv2.IMREAD_GRAYSCALE
data_file = self.get_data_file(vid_nr, f_nr, dkey)
if read_flag is not None:
data = cv2.imread(str(data_file), read_flag)
else:
data = cv2.imread(str(data_file))
if data is None:
raise FileNotFoundError(data_file)
if dkey == 'frame':
data = np.ascontiguousarray(data[:, :, ::-1])
if dkey == 'sal' and self.train and self.sal_offset is not None:
data += self.sal_offset
data[0, 0] = 0
return data
def preprocess_sequence(self, frame_seq, dkey, vid_nr):
transformations = []
if dkey == 'frame':
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
if dkey == 'frame' and 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
elif dkey == 'sal':
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
transformations.append(transforms.Lambda(utils.normalize_tensor))
# elif dkey == 'fix':
# transformations.append(
# transforms.Lambda(lambda fix: torch.gt(fix, 0.5)))
##!
processing = transforms.Compose(transformations)
tensor = [processing(img) for img in frame_seq]
tensor = torch.stack(tensor)
return tensor
def get_seq(self, vid_nr, frame_nrs, dkey):
data_seq = [self.load_data(vid_nr, f_nr, dkey) for f_nr in frame_nrs]
return self.preprocess_sequence(data_seq, dkey, vid_nr)
def get_data(self, vid_nr, start):
if start is None:
max_start = self.n_images_dict[vid_nr] - self.clip_len + 1
if max_start == 1:
start = max_start
else:
start = np.random.randint(1, max_start)
frame_nrs = self.get_frame_nrs(vid_nr, start)
frame_seq = self.get_seq(vid_nr, frame_nrs, 'frame')
target_size = self.target_size_dict[vid_nr]
# if self.phase == 'test' and self.source in ('DADA2000',):
# return frame_nrs, frame_seq, target_size
sal_seq = self.get_seq(vid_nr, frame_nrs, 'sal')
fix_seq = torch.full(self.target_size, 0, dtype=torch.bool)
# Fixation maps would be needed for the NSS, AUC-J and AUC-S metrics:
# fix_seq = self.get_seq(vid_nr, frame_nrs, 'fix')
# Here a zero placeholder is returned and sal_seq is used in place of fix_seq.
return frame_nrs, frame_seq, sal_seq, fix_seq, target_size
def __getitem__(self, item):
vid_nr, start = self.samples[item]
data = self.get_data(vid_nr, start)
return data
class DT16Dataset(Dataset, utils.KwConfigClass):
img_channels = 1
n_train_val_videos = 115
test_vid_nrs = (115, 153) #1110
frame_rate = 24
source = 'DT16'
dynamic = True
def __init__(self,
seq_len=12,
frame_modulo=4,
max_seq_len=1e6,
preproc_cfg=None,
out_size=(224, 384), phase='train', target_size=(360, 640),
debug=False, val_size=19, n_x_val=3, x_val_step=2,
x_val_seed=0, seq_per_vid=1, subset=None, verbose=1,
n_images_file='DT16_n_images.dat', seq_per_vid_val=2,
sal_offset=None):
self.phase = phase
self.train = phase == 'train'
if not self.train:
preproc_cfg = {}
elif preproc_cfg is None:
preproc_cfg = {}
preproc_cfg.update({
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
})
self.preproc_cfg = preproc_cfg
self.out_size = out_size
self.debug = debug
self.val_size = val_size
self.n_x_val = n_x_val
self.x_val_step = x_val_step
self.x_val_seed = x_val_seed
self.seq_len = seq_len
self.seq_per_vid = seq_per_vid
self.seq_per_vid_val = seq_per_vid_val
self.frame_modulo = frame_modulo
self.clip_len = seq_len * frame_modulo
self.subset = subset
self.verbose = verbose
self.n_images_file = n_images_file
self.target_size = target_size
self.sal_offset = sal_offset
self.max_seq_len = max_seq_len
self._dir = None
self._n_images_dict = None
self.vid_nr_array = None
# Evaluation
if phase in ('eval', 'test'):
self.seq_len = int(1e6)
if self.phase in ('test',):
self.vid_nr_array = list(range(
self.test_vid_nrs[0], self.test_vid_nrs[1] + 1))
self.samples, self.target_size_dict = self.prepare_samples()
return
# Cross-validation split
n_videos = self.n_train_val_videos
assert(self.val_size <= n_videos // self.n_x_val)
assert(self.x_val_step < self.n_x_val)
vid_nr_array = np.arange(1, n_videos + 1)
if self.x_val_seed > 0:
np.random.seed(self.x_val_seed)
np.random.shuffle(vid_nr_array)
val_start = (len(vid_nr_array) - self.val_size) //\
(self.n_x_val - 1) * self.x_val_step
vid_nr_array = vid_nr_array.tolist()
if not self.train:
self.vid_nr_array =\
vid_nr_array[val_start:val_start + self.val_size]
else:
del vid_nr_array[val_start:val_start + self.val_size]
self.vid_nr_array = vid_nr_array
if self.subset is not None:
self.vid_nr_array =\
self.vid_nr_array[:int(len(self.vid_nr_array) * self.subset)]
self.samples, self.target_size_dict = self.prepare_samples()
@property
def n_images_dict(self):
if self._n_images_dict is None:
with open(config_path.parent / self.n_images_file, 'r') as f:
self._n_images_dict = {
idx + 1: int(line) for idx, line in enumerate(f)
if idx + 1 in self.vid_nr_array}
return self._n_images_dict
@property
def dir(self):
if self._dir is None:
self._dir = Path(os.environ["DT16_DATA_DIR"])
return self._dir
@property
def n_samples(self):
return len(self.vid_nr_array)
def __len__(self):
return len(self.samples)
def prepare_samples(self):
samples = []
too_short = 0
too_long = 0
for vid_nr, n_images in self.n_images_dict.items():
if self.phase in ('eval', 'test'):
samples += [
(vid_nr, offset + 1) for offset in range(self.frame_modulo)]
continue
# Skip videos whose frame count is too small or too large
if n_images < self.clip_len:
too_short += 1
continue
if n_images // self.frame_modulo > self.max_seq_len:
too_long += 1
continue
#
if self.phase == 'train':
samples += [(vid_nr, None)] * self.seq_per_vid
continue
elif self.phase == 'valid':
x = n_images // (self.seq_per_vid_val * 2) - self.clip_len // 2
start = max(1, x)
end = min(n_images - self.clip_len, n_images - x)
samples += [
(vid_nr, int(start)) for start in
np.linspace(start, end, self.seq_per_vid_val)]
continue
# Print basic statistics about dataset loading
if self.phase not in ('eval', 'test') and self.n_images_dict:
n_loaded = len(self.n_images_dict) - too_short - too_long
print(f"{n_loaded} videos loaded "
f"({n_loaded / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_short} videos are too short "
f"({too_short / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_long} videos are too long "
f"({too_long / len(self.n_images_dict) * 100:.1f}%)")
target_size_dict = {
vid_nr: self.target_size for vid_nr in self.n_images_dict.keys()}
return samples, target_size_dict
def get_frame_nrs(self, vid_nr, start):
n_images = self.n_images_dict[vid_nr]
if self.phase in ('eval', 'test'):
return list(range(start, n_images + 1, self.frame_modulo))
return list(range(start, start + self.clip_len, self.frame_modulo))
def get_data_file(self, vid_nr, f_nr, dkey):
if dkey == 'frame':
folder = 'images'
elif dkey == 'sal':
folder = 'maps'
elif dkey == 'fix':
folder = 'fixation'
else:
raise ValueError(f'Unknown data key {dkey}')
        img_path = str(self.dir / f'{vid_nr:04d}' / folder / f'{f_nr:04d}.png')
return img_path
def load_data(self, vid_nr, f_nr, dkey):
read_flag = None if dkey == 'frame' else cv2.IMREAD_GRAYSCALE
data_file = self.get_data_file(vid_nr, f_nr, dkey)
if read_flag is not None:
data = cv2.imread(str(data_file), read_flag)
else:
data = cv2.imread(str(data_file))
if data is None:
raise FileNotFoundError(data_file)
if dkey == 'frame':
data = np.ascontiguousarray(data[:, :, ::-1])
if dkey == 'sal' and self.train and self.sal_offset is not None:
data += self.sal_offset
data[0, 0] = 0
return data
def preprocess_sequence(self, frame_seq, dkey, vid_nr):
transformations = []
if dkey == 'frame':
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
if dkey == 'frame' and 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
elif dkey == 'sal':
transformations.append(transforms.Lambda(utils.normalize_tensor))
# elif dkey == 'fix':
# transformations.append(
# transforms.Lambda(lambda fix: torch.gt(fix, 0.5)))
processing = transforms.Compose(transformations)
tensor = [processing(img) for img in frame_seq]
tensor = torch.stack(tensor)
return tensor
def get_seq(self, vid_nr, frame_nrs, dkey):
data_seq = [self.load_data(vid_nr, f_nr, dkey) for f_nr in frame_nrs]
return self.preprocess_sequence(data_seq, dkey, vid_nr)
def get_data(self, vid_nr, start):
if start is None:
max_start = self.n_images_dict[vid_nr] - self.clip_len + 1
if max_start == 1:
start = max_start
else:
start = np.random.randint(1, max_start)
# print('vid_nr:', vid_nr, '\t start:', start)
frame_nrs = self.get_frame_nrs(vid_nr, start)
frame_seq = self.get_seq(vid_nr, frame_nrs, 'frame')
target_size = self.target_size_dict[vid_nr]
# if self.phase == 'test' and self.source in ('DReye',):
# return frame_nrs, frame_seq, target_size
sal_seq = self.get_seq(vid_nr, frame_nrs, 'sal')
fix_seq = torch.full(self.target_size, 0, dtype=torch.bool)
        # fixation maps would be used for the NSS / AUC-J / AUC-S metrics
        # fix_seq = self.get_seq(vid_nr, frame_nrs, 'fix')
        # sal_seq is used in place of fix_seq here
return frame_nrs, frame_seq, sal_seq, fix_seq, target_size
def __getitem__(self, item):
vid_nr, start = self.samples[item]
data = self.get_data(vid_nr, start)
return data
class BDDADataset(Dataset, utils.KwConfigClass):
img_channels = 1
n_train_val_videos = 926
test_vid_nrs = (1127, 1429) #1110
frame_rate = 30
source = 'BDDA'
dynamic = True
def __init__(self,
seq_len=12,
frame_modulo=5,
max_seq_len=1e6,
preproc_cfg=None,
out_size=(224, 384), phase='train', target_size=(360, 640),
debug=False, val_size=200, n_x_val=3, x_val_step=2,
x_val_seed=0, seq_per_vid=1, subset=None, verbose=1,
n_images_file='BDDA_n_images.dat', seq_per_vid_val=2,
sal_offset=None):
self.phase = phase
self.train = phase == 'train'
if not self.train:
preproc_cfg = {}
elif preproc_cfg is None:
preproc_cfg = {}
preproc_cfg.update({
'rgb_mean': (0.485, 0.456, 0.406),
'rgb_std': (0.229, 0.224, 0.225),
})
self.preproc_cfg = preproc_cfg
self.out_size = out_size
self.debug = debug
self.val_size = val_size
self.n_x_val = n_x_val
self.x_val_step = x_val_step
self.x_val_seed = x_val_seed
self.seq_len = seq_len
self.seq_per_vid = seq_per_vid
self.seq_per_vid_val = seq_per_vid_val
self.frame_modulo = frame_modulo
self.clip_len = seq_len * frame_modulo
self.subset = subset
self.verbose = verbose
self.n_images_file = n_images_file
self.target_size = target_size
self.sal_offset = sal_offset
self.max_seq_len = max_seq_len
self._dir = None
self._n_images_dict = None
self.vid_nr_array = None
# Evaluation
if phase in ('eval', 'test'):
self.seq_len = int(1e6)
if self.phase in ('test',):
self.vid_nr_array = list(range(
self.test_vid_nrs[0], self.test_vid_nrs[1] + 1))
self.samples, self.target_size_dict = self.prepare_samples()
return
# Cross-validation split
n_videos = self.n_train_val_videos
assert(self.val_size <= n_videos // self.n_x_val)
assert(self.x_val_step < self.n_x_val)
vid_nr_array = np.arange(1, n_videos + 1)
if self.x_val_seed > 0:
np.random.seed(self.x_val_seed)
np.random.shuffle(vid_nr_array)
val_start = (len(vid_nr_array) - self.val_size) //\
(self.n_x_val - 1) * self.x_val_step
vid_nr_array = vid_nr_array.tolist()
if not self.train:
self.vid_nr_array =\
vid_nr_array[val_start:val_start + self.val_size]
else:
del vid_nr_array[val_start:val_start + self.val_size]
self.vid_nr_array = vid_nr_array
if self.subset is not None:
self.vid_nr_array =\
self.vid_nr_array[:int(len(self.vid_nr_array) * self.subset)]
self.samples, self.target_size_dict = self.prepare_samples()
@property
def n_images_dict(self):
if self._n_images_dict is None:
with open(config_path.parent / self.n_images_file, 'r') as f:
self._n_images_dict = {
idx + 1: int(line) for idx, line in enumerate(f)
if idx + 1 in self.vid_nr_array}
return self._n_images_dict
@property
def dir(self):
if self._dir is None:
self._dir = Path(os.environ["BDDA_DATA_DIR"])
return self._dir
@property
def n_samples(self):
return len(self.vid_nr_array)
def __len__(self):
return len(self.samples)
def prepare_samples(self):
samples = []
too_short = 0
too_long = 0
for vid_nr, n_images in self.n_images_dict.items():
if self.phase in ('eval', 'test'):
samples += [
(vid_nr, offset + 1) for offset in range(self.frame_modulo)]
continue
            # skip videos whose frame count is too small or too large
if n_images < self.clip_len:
too_short += 1
continue
if n_images // self.frame_modulo > self.max_seq_len:
too_long += 1
continue
#
if self.phase == 'train':
samples += [(vid_nr, None)] * self.seq_per_vid
continue
elif self.phase == 'valid':
x = n_images // (self.seq_per_vid_val * 2) - self.clip_len // 2
start = max(1, x)
end = min(n_images - self.clip_len, n_images - x)
                samples += [
                    (vid_nr, int(s)) for s in
                    np.linspace(start, end, self.seq_per_vid_val)]
continue
        # print basic dataset loading statistics
if self.phase not in ('eval', 'test') and self.n_images_dict:
n_loaded = len(self.n_images_dict) - too_short - too_long
print(f"{n_loaded} videos loaded "
f"({n_loaded / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_short} videos are too short "
f"({too_short / len(self.n_images_dict) * 100:.1f}%)")
print(f"{too_long} videos are too long "
f"({too_long / len(self.n_images_dict) * 100:.1f}%)")
target_size_dict = {
vid_nr: self.target_size for vid_nr in self.n_images_dict.keys()}
return samples, target_size_dict
def get_frame_nrs(self, vid_nr, start):
n_images = self.n_images_dict[vid_nr]
if self.phase in ('eval', 'test'):
return list(range(start, n_images + 1, self.frame_modulo))
return list(range(start, start + self.clip_len, self.frame_modulo))
def get_data_file(self, vid_nr, f_nr, dkey):
if dkey == 'frame':
folder = 'images'
elif dkey == 'sal':
folder = 'new_maps'
elif dkey == 'fix':
folder = 'fixation'
else:
raise ValueError(f'Unknown data key {dkey}')
        img_path = str(self.dir / f'{vid_nr:04d}' / folder / f'{f_nr:04d}.png')
return img_path
def load_data(self, vid_nr, f_nr, dkey):
read_flag = None if dkey == 'frame' else cv2.IMREAD_GRAYSCALE
data_file = self.get_data_file(vid_nr, f_nr, dkey)
if read_flag is not None:
data = cv2.imread(str(data_file), read_flag)
else:
data = cv2.imread(str(data_file))
if data is None:
raise FileNotFoundError(data_file)
if dkey == 'frame':
data = np.ascontiguousarray(data[:, :, ::-1])
if dkey == 'sal' and self.train and self.sal_offset is not None:
data += self.sal_offset
data[0, 0] = 0
return data
def preprocess_sequence(self, frame_seq, dkey, vid_nr):
transformations = []
if dkey == 'frame':
transformations.append(transforms.ToPILImage())
transformations.append(transforms.Resize(
self.out_size, interpolation=PIL.Image.LANCZOS))
transformations.append(transforms.ToTensor())
if dkey == 'frame' and 'rgb_mean' in self.preproc_cfg:
transformations.append(
transforms.Normalize(
self.preproc_cfg['rgb_mean'], self.preproc_cfg['rgb_std']))
elif dkey == 'sal':
transformations.append(transforms.Lambda(utils.normalize_tensor))
# elif dkey == 'fix':
# transformations.append(
# transforms.Lambda(lambda fix: torch.gt(fix, 0.5)))
processing = transforms.Compose(transformations)
tensor = [processing(img) for img in frame_seq]
tensor = torch.stack(tensor)
return tensor
def get_seq(self, vid_nr, frame_nrs, dkey):
data_seq = [self.load_data(vid_nr, f_nr, dkey) for f_nr in frame_nrs]
return self.preprocess_sequence(data_seq, dkey, vid_nr)
def get_data(self, vid_nr, start):
if start is None:
max_start = self.n_images_dict[vid_nr] - self.clip_len + 1
if max_start == 1:
start = max_start
else:
start = np.random.randint(1, max_start)
frame_nrs = self.get_frame_nrs(vid_nr, start)
frame_seq = self.get_seq(vid_nr, frame_nrs, 'frame')
target_size = self.target_size_dict[vid_nr]
# if self.phase == 'test' and self.source in ('DReye',):
# return frame_nrs, frame_seq, target_size
sal_seq = self.get_seq(vid_nr, frame_nrs, 'sal')
fix_seq = torch.full(self.target_size, 0, dtype=torch.bool)
        # fixation maps would be used for the NSS / AUC-J / AUC-S metrics
        # fix_seq = self.get_seq(vid_nr, frame_nrs, 'fix')
        # sal_seq is used in place of fix_seq here
return frame_nrs, frame_seq, sal_seq, fix_seq, target_size
def __getitem__(self, item):
vid_nr, start = self.samples[item]
data = self.get_data(vid_nr, start)
        return data
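# --- usage sketch (standalone; values taken from the defaults above) --------
# get_frame_nrs keeps every frame_modulo-th frame out of a clip_len window,
# so a training clip of clip_len = seq_len * frame_modulo = 12 * 5 = 60 raw
# frames always yields exactly seq_len frame numbers:
_clip = list(range(1, 1 + 12 * 5, 5))
assert len(_clip) == 12 and _clip[0] == 1 and _clip[-1] == 56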
import numpy as np
import gym
#*******************************************************************
# FIND CAVE TASK
#*******************************************************************
# custom action wrapper for complete GAIL agent for MineRL
class ActionShaping_FindCave(gym.ActionWrapper):
def __init__(self, env, camera_angle=10, always_attack=False):
super().__init__(env)
self.camera_angle = camera_angle
self.always_attack = always_attack
self._actions = [
[('attack', 1)], #0
[('forward', 1)], #1
[('forward', 1), ('jump', 1)], #2
[('camera', [-self.camera_angle, 0])], #3
[('camera', [self.camera_angle, 0])], #4
[('camera', [0, self.camera_angle])], #5
[('camera', [0, -self.camera_angle])], #6
[('back', 1)], #7
[('left', 1)], #8
[('right', 1)], #9
[('jump', 1)], #10
#[('equip',11), ('use', 1)],
[('forward', 1), ('attack', 1)], #11
]
self.actions = []
for actions in self._actions:
act = self.env.action_space.noop()
for a, v in actions:
act[a] = v
if self.always_attack:
act['attack'] = 1
self.actions.append(act)
# add no-op action
act = self.env.action_space.noop()
self.actions.append(act)
self.action_space = gym.spaces.Discrete(len(self.actions))
def action(self, action):
return self.actions[action]
def processed_actions_to_wrapper_actions_FindCave(dataset_actions, camera_margin=5):
"""
    Turn a batch of actions from the dataset (`batch_iter`) into a numpy
    array of discrete action ids matching ActionShaping_FindCave._actions.
    `camera_margin` sets the threshold for what counts as "moving camera".
    Note: hardcoded to the ordering of ActionShaping_FindCave._actions;
    if you change that list, remember to change this function too.
    Rows without a discrete match fall through to the trailing no-op id.
"""
# There are dummy dimensions of shape one
camera_actions = dataset_actions[:,10:].astype(np.float32)
attack_actions = dataset_actions[:,0].astype(np.float32)
forward_actions = dataset_actions[:,3].astype(np.float32)
jump_actions = dataset_actions[:,4].astype(np.float32)
back_actions = dataset_actions[:,1].astype(np.float32)
left_actions = dataset_actions[:,5].astype(np.float32)
right_actions = dataset_actions[:,6].astype(np.float32)
equip_actions = dataset_actions[:,2]
use_actions = dataset_actions[:,9].astype(np.float32)
sneak_actions = dataset_actions[:,7].astype(np.float32)
sprint_actions = dataset_actions[:,8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
for i in range(len(camera_actions)):
# Moving camera is most important (horizontal first)
if camera_actions[i][0] < -camera_margin:
actions[i] = 3
elif camera_actions[i][0] > camera_margin:
actions[i] = 4
elif camera_actions[i][1] > camera_margin:
actions[i] = 5
elif camera_actions[i][1] < -camera_margin:
actions[i] = 6
elif forward_actions[i] == 1:
if jump_actions[i] == 1:
actions[i] = 2
elif attack_actions[i] == 1:
actions[i] = 11
else:
actions[i] = 1
elif attack_actions[i] == 1:
actions[i] = 0
elif left_actions[i] == 1:
actions[i] = 8
elif right_actions[i] ==1:
actions[i] = 9
elif back_actions[i] == 1:
actions[i] = 7
elif jump_actions[i] == 1:
actions[i] = 10
else:
# No reasonable mapping (would be no-op)
actions[i] = 12
return actions
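# --- condensed sketch (hypothetical helper, standalone) ---------------------
# The mapping above is priority-ordered: camera movement wins over forward,
# forward over attack, and anything unmatched falls through to the no-op id.
# _cave_action_id is illustrative only and mirrors a subset of the branches.
def _cave_action_id(camera, forward=0, jump=0, attack=0, margin=5):
    if camera[0] < -margin:
        return 3
    if camera[0] > margin:
        return 4
    if camera[1] > margin:
        return 5
    if camera[1] < -margin:
        return 6
    if forward and jump:
        return 2
    if forward and attack:
        return 11
    if forward:
        return 1
    if attack:
        return 0
    return 12  # trailing no-op slot

assert _cave_action_id((-12.0, 0.0), forward=1) == 3  # camera beats forward
assert _cave_action_id((0.0, 0.0), forward=1, jump=1) == 2
assert _cave_action_id((0.0, 0.0)) == 12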
#*******************************************************************
# WATERFALL TASK
#*******************************************************************
# custom action wrapper for complete GAIL agent for MineRL
class ActionShaping_Waterfall(gym.ActionWrapper):
def __init__(self, env, camera_angle=10, always_attack=False):
super().__init__(env)
self.camera_angle = camera_angle
self.always_attack = always_attack
self._actions = [
[('attack', 1)], #0
[('forward', 1)], #1
[('forward', 1), ('jump', 1)], #2
[('camera', [-self.camera_angle, 0])], #3
[('camera', [self.camera_angle, 0])], #4
[('camera', [0, self.camera_angle])], #5
[('camera', [0, -self.camera_angle])], #6
[('back', 1)], #7
[('left', 1)], #8
[('right', 1)], #9
[('jump', 1)], #10
[('forward', 1), ('attack', 1)], #11
[('equip','water_bucket'), ('use', 1)], #12 #water bucket
[('equip','stone_pickaxe'), ('use', 1)], #13 #stone pickaxe
[('equip','stone_shovel'), ('use', 1)], #14 #stone shovel
[('equip','cobblestone'), ('use', 1)], #15 #cobblestone
#[('equip',1), ('use', 1)], #16 #bucket
]
self.actions = []
for actions in self._actions:
act = self.env.action_space.noop()
for a, v in actions:
act[a] = v
if self.always_attack:
act['attack'] = 1
self.actions.append(act)
# add no-op action
act = self.env.action_space.noop()
self.actions.append(act)
self.action_space = gym.spaces.Discrete(len(self.actions))
def action(self, action):
return self.actions[action]
def processed_actions_to_wrapper_actions_Waterfall(dataset_actions, camera_margin=5):
"""
    Turn a batch of actions from the dataset (`batch_iter`) into a numpy
    array of discrete action ids matching ActionShaping_Waterfall._actions.
    `camera_margin` sets the threshold for what counts as "moving camera".
    Note: hardcoded to the ordering of ActionShaping_Waterfall._actions;
    if you change that list, remember to change this function too.
    Rows without a discrete match fall through to the trailing no-op id.
"""
# There are dummy dimensions of shape one
camera_actions = dataset_actions[:,10:].astype(np.float32)
attack_actions = dataset_actions[:,0].astype(np.float32)
forward_actions = dataset_actions[:,3].astype(np.float32)
jump_actions = dataset_actions[:,4].astype(np.float32)
back_actions = dataset_actions[:,1].astype(np.float32)
left_actions = dataset_actions[:,5].astype(np.float32)
right_actions = dataset_actions[:,6].astype(np.float32)
equip_actions = dataset_actions[:,2]
use_actions = dataset_actions[:,9].astype(np.float32)
sneak_actions = dataset_actions[:,7].astype(np.float32)
sprint_actions = dataset_actions[:,8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
#Enum(air,bucket,carrot,cobblestone,fence,fence_gate,none,other,snowball,stone_pickaxe,stone_shovel,water_bucket,wheat,wheat_seeds),
equip_actions_dict = dict()
equip_actions_dict['water_bucket'] = 12
equip_actions_dict['stone_pickaxe'] = 13
equip_actions_dict['stone_shovel'] = 14
equip_actions_dict['cobblestone'] = 15
#equip_actions_dict['bucket'] = 16
# step through all actions
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
# keep track of what is currently equipped
if equip_actions[i] != 'none' and equip_actions[i] in equip_actions_dict:
currently_equipped_item = equip_actions[i]
# equip and use actions are the most important
if use_actions[i] == 1:
actions[i] = equip_actions_dict[currently_equipped_item]
# Moving camera is second most important (horizontal first)
elif camera_actions[i][0] < -camera_margin:
actions[i] = 3
elif camera_actions[i][0] > camera_margin:
actions[i] = 4
elif camera_actions[i][1] > camera_margin:
actions[i] = 5
elif camera_actions[i][1] < -camera_margin:
actions[i] = 6
elif forward_actions[i] == 1:
if jump_actions[i] == 1:
actions[i] = 2
elif attack_actions[i] == 1:
actions[i] = 11
else:
actions[i] = 1
elif attack_actions[i] == 1:
actions[i] = 0
elif left_actions[i] == 1:
actions[i] = 8
elif right_actions[i] ==1:
actions[i] = 9
elif back_actions[i] == 1:
actions[i] = 7
elif jump_actions[i] == 1:
actions[i] = 10
else:
# No reasonable mapping (would be no-op)
actions[i] = 16
return actions
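# --- equip-tracking sketch (hypothetical helper, standalone) ----------------
# On top of the movement mapping, the waterfall mapper remembers the most
# recently equipped item and collapses every `use` frame onto that item's
# discrete id. A minimal standalone illustration of that bookkeeping:
def _use_action_ids(equip_seq, use_seq, table, current='stone_pickaxe'):
    out = []
    for equip, use in zip(equip_seq, use_seq):
        if equip != 'none' and equip in table:
            current = equip  # track the most recently equipped item
        out.append(table[current] if use else None)
    return out

_wf_table = {'water_bucket': 12, 'stone_pickaxe': 13}
assert _use_action_ids(['water_bucket', 'none'], [0, 1], _wf_table) == [None, 12]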
#*******************************************************************
# ANIMAL PEN TASK
#*******************************************************************
# custom action wrapper for complete GAIL agent for MineRL
class ActionShaping_Animalpen(gym.ActionWrapper):
def __init__(self, env, camera_angle=10, always_attack=False):
super().__init__(env)
self.equip_mapping = {'air':0,'bucket':1,'carrot':2,'cobblestone':3,'fence':4,'fence_gate':5,
'none':6,'other':7,'snowball':8,'stone_pickaxe':9,'stone_shovel':10,'water_bucket':11,
'wheat':12,'wheat_seeds':13}
self.camera_angle = camera_angle
self.always_attack = always_attack
self._actions = [
[('attack', 1)], #0
[('forward', 1)], #1
[('forward', 1), ('jump', 1)], #2
[('camera', [-self.camera_angle, 0])], #3
[('camera', [self.camera_angle, 0])], #4
[('camera', [0, self.camera_angle])], #5
[('camera', [0, -self.camera_angle])], #6
[('back', 1)], #7
[('left', 1)], #8
[('right', 1)], #9
[('jump', 1)], #10
[('forward', 1), ('attack', 1)], #11
[('equip','carrot')], #12 #carrot
[('equip','fence'), ('use', 1)], #13 #fence
[('equip','fence_gate'), ('use', 1)], #14 #fence_gate
[('equip','wheat')], #15 #wheat
[('equip','wheat_seeds')], #16 #wheat_seeds
]
self.actions = []
for actions in self._actions:
act = self.env.action_space.noop()
for a, v in actions:
act[a] = v
if self.always_attack:
act['attack'] = 1
self.actions.append(act)
# add no-op action
act = self.env.action_space.noop()
self.actions.append(act)
self.action_space = gym.spaces.Discrete(len(self.actions))
def action(self, action):
return self.actions[action]
def processed_actions_to_wrapper_actions_Animalpen(dataset_actions, camera_margin=5):
"""
    Turn a batch of actions from the dataset (`batch_iter`) into a numpy
    array of discrete action ids matching ActionShaping_Animalpen._actions.
    `camera_margin` sets the threshold for what counts as "moving camera".
    Note: hardcoded to the ordering of ActionShaping_Animalpen._actions;
    if you change that list, remember to change this function too.
    Rows without a discrete match fall through to the trailing no-op id.
"""
# There are dummy dimensions of shape one
camera_actions = dataset_actions[:,10:].astype(np.float32)
attack_actions = dataset_actions[:,0].astype(np.float32)
forward_actions = dataset_actions[:,3].astype(np.float32)
jump_actions = dataset_actions[:,4].astype(np.float32)
back_actions = dataset_actions[:,1].astype(np.float32)
left_actions = dataset_actions[:,5].astype(np.float32)
right_actions = dataset_actions[:,6].astype(np.float32)
equip_actions = dataset_actions[:,2]
use_actions = dataset_actions[:,9].astype(np.float32)
sneak_actions = dataset_actions[:,7].astype(np.float32)
sprint_actions = dataset_actions[:,8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
#Enum(air,bucket,carrot,cobblestone,fence,fence_gate,none,other,snowball,stone_pickaxe,stone_shovel,water_bucket,wheat,wheat_seeds)
equip_actions_dict = dict()
equip_actions_dict['carrot'] = 12
equip_actions_dict['fence'] = 13
equip_actions_dict['fence_gate'] = 14
equip_actions_dict['wheat'] = 15
equip_actions_dict['wheat_seeds'] = 16
# step through all actions
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
# keep track of what is currently equipped
if equip_actions[i] != 'none' and equip_actions[i] in equip_actions_dict:
currently_equipped_item = equip_actions[i]
# equip and use actions are the most important
if equip_actions[i] == 'carrot':
actions[i] = equip_actions_dict['carrot']
elif equip_actions[i] == 'wheat':
actions[i] = equip_actions_dict['wheat']
elif equip_actions[i] == 'wheat_seeds':
actions[i] = equip_actions_dict['wheat_seeds']
elif use_actions[i] == 1:
actions[i] = equip_actions_dict[currently_equipped_item]
# Moving camera is second most important (horizontal first)
elif camera_actions[i][0] < -camera_margin:
actions[i] = 3
elif camera_actions[i][0] > camera_margin:
actions[i] = 4
elif camera_actions[i][1] > camera_margin:
actions[i] = 5
elif camera_actions[i][1] < -camera_margin:
actions[i] = 6
elif forward_actions[i] == 1:
if jump_actions[i] == 1:
actions[i] = 2
elif attack_actions[i] == 1:
actions[i] = 11
else:
actions[i] = 1
elif attack_actions[i] == 1:
actions[i] = 0
elif left_actions[i] == 1:
actions[i] = 8
elif right_actions[i] ==1:
actions[i] = 9
elif back_actions[i] == 1:
actions[i] = 7
elif jump_actions[i] == 1:
actions[i] = 10
else:
# No reasonable mapping (would be no-op)
actions[i] = 17
return actions
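# --- sketch: equip-only special cases (hypothetical helper, standalone) -----
# Unlike the waterfall mapper, the pen mapper emits the carrot/wheat/
# wheat_seeds ids on the equip action alone (presumably for luring animals),
# while placement items such as the fence still require `use`:
_lure_items = {'carrot': 12, 'wheat': 15, 'wheat_seeds': 16}

def _pen_id(equip, use, placed_id=13):
    if equip in _lure_items:
        return _lure_items[equip]  # equip alone is enough for lure items
    if use:
        return placed_id           # placement items still need `use`
    return None

assert _pen_id('carrot', 0) == 12
assert _pen_id('fence', 1) == 13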
#*******************************************************************
# VILLAGE HOUSE TASK
#*******************************************************************
# custom action wrapper for complete GAIL agent for MineRL
class ActionShaping_Villagehouse(gym.ActionWrapper):
def __init__(self, env, camera_angle=10, always_attack=False):
super().__init__(env)
self.equip_mapping = {'acacia_door':0,'acacia_fence':1,'cactus':2,'cobblestone':3,'dirt':4,'fence':5,'flower_pot':6,
'glass':7,'ladder':8,'log#0':9,'log#1':10,'log2#0':12,'none':13,'other':14,'planks#0':15,
'planks#1':16,'planks#4':17,'red_flower':18,'sand,sandstone#0':19,'sandstone#2':20,'sandstone_stairs':21,
'snowball':22,'spruce_door':23,'spruce_fence':24,'stone_axe':25,'stone_pickaxe':26,'stone_stairs':27,
'torch':28,'wooden_door':29,'wooden_pressure_plate':30}
self.camera_angle = camera_angle
self.always_attack = always_attack
self._actions = [
[('attack', 1)], #0
[('forward', 1)], #1
[('forward', 1), ('jump', 1)], #2
[('camera', [-self.camera_angle, 0])], #3
[('camera', [self.camera_angle, 0])], #4
[('camera', [0, self.camera_angle])], #5
[('camera', [0, -self.camera_angle])], #6
[('back', 1)], #7
[('left', 1)], #8
[('right', 1)], #9
[('jump', 1)], #10
[('forward', 1), ('attack', 1)], #11
[('equip','acacia_door'), ('use', 1)], #12
[('equip','acacia_fence'), ('use', 1)], #13
[('equip','cactus'), ('use', 1)], #14
[('equip','cobblestone'), ('use', 1)], #15
[('equip','dirt'), ('use', 1)], #16
[('equip','fence'), ('use', 1)], #17
[('equip','flower_pot'), ('use', 1)], #18
[('equip','glass'), ('use', 1)], #19
[('equip','ladder'), ('use', 1)], #20
[('equip','log#0'), ('use', 1)], #21
[('equip','log#1'), ('use', 1)], #22
[('equip','log2#0'), ('use', 1)], #23
[('equip','planks#0'), ('use', 1)], #24
[('equip','planks#1'), ('use', 1)], #25
[('equip','planks#4'), ('use', 1)], #26
[('equip','red_flower'), ('use', 1)], #27
[('equip','sand,sandstone#0'), ('use', 1)], #28
[('equip','sandstone#2'), ('use', 1)], #29
[('equip','sandstone_stairs'), ('use', 1)],#30
[('equip','spruce_door'), ('use', 1)], #31
[('equip','spruce_fence'), ('use', 1)], #32
[('equip','stone_axe'), ('use', 1)], #33
[('equip','stone_pickaxe'), ('use', 1)], #34
[('equip','stone_stairs'), ('use', 1)], #35
[('equip','torch'), ('use', 1)], #36
[('equip','wooden_door'), ('use', 1)], #37
[('equip','wooden_pressure_plate'), ('use', 1)], #38
]
self.actions = []
for actions in self._actions:
act = self.env.action_space.noop()
for a, v in actions:
act[a] = v
if self.always_attack:
act['attack'] = 1
self.actions.append(act)
# add no-op action
act = self.env.action_space.noop()
self.actions.append(act)
self.action_space = gym.spaces.Discrete(len(self.actions))
def action(self, action):
return self.actions[action]
def processed_actions_to_wrapper_actions_Villagehouse(dataset_actions, camera_margin=5):
"""
    Turn a batch of actions from the dataset (`batch_iter`) into a numpy
    array of discrete action ids matching ActionShaping_Villagehouse._actions.
    `camera_margin` sets the threshold for what counts as "moving camera".
    Note: hardcoded to the ordering of ActionShaping_Villagehouse._actions;
    if you change that list, remember to change this function too.
    Rows without a discrete match fall through to the trailing no-op id.
"""
# There are dummy dimensions of shape one
camera_actions = dataset_actions[:,10:].astype(np.float32)
attack_actions = dataset_actions[:,0].astype(np.float32)
forward_actions = dataset_actions[:,3].astype(np.float32)
jump_actions = dataset_actions[:,4].astype(np.float32)
back_actions = dataset_actions[:,1].astype(np.float32)
left_actions = dataset_actions[:,5].astype(np.float32)
right_actions = dataset_actions[:,6].astype(np.float32)
equip_actions = dataset_actions[:,2]
use_actions = dataset_actions[:,9].astype(np.float32)
sneak_actions = dataset_actions[:,7].astype(np.float32)
sprint_actions = dataset_actions[:,8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
#Enum(acacia_door,acacia_fence,cactus,cobblestone,dirt,fence,flower_pot,glass,ladder,log#0,log#1,log2#0,none,other,planks#0,planks#1,planks#4,red_flower,sand,sandstone#0,sandstone#2,sandstone_stairs,snowball,spruce_door,spruce_fence,stone_axe,stone_pickaxe,stone_stairs,torch,wooden_door,wooden_pressure_plate)
equip_actions_dict = dict()
equip_actions_dict['acacia_door']=12
equip_actions_dict['acacia_fence']=13
equip_actions_dict['cactus']=14
equip_actions_dict['cobblestone']=15
equip_actions_dict['dirt']=16
equip_actions_dict['fence']=17
equip_actions_dict['flower_pot']=18
equip_actions_dict['glass']=19
equip_actions_dict['ladder']=20
equip_actions_dict['log#0']=21
equip_actions_dict['log#1']=22
equip_actions_dict['log2#0']=23
equip_actions_dict['planks#0']=24
equip_actions_dict['planks#1']=25
equip_actions_dict['planks#4']=26
equip_actions_dict['red_flower']=27
equip_actions_dict['sand,sandstone#0']=28
equip_actions_dict['sandstone#2']=29
equip_actions_dict['sandstone_stairs']=30
equip_actions_dict['spruce_door']=31
equip_actions_dict['spruce_fence']=32
equip_actions_dict['stone_axe']=33
equip_actions_dict['stone_pickaxe']=34
equip_actions_dict['stone_stairs']=35
equip_actions_dict['torch']=36
equip_actions_dict['wooden_door']=37
equip_actions_dict['wooden_pressure_plate']=38
# step through all actions
currently_equipped_item = 'stone_pickaxe'
for i in range(len(camera_actions)):
# keep track of what is currently equipped
if equip_actions[i] != 'none' and equip_actions[i] in equip_actions_dict:
currently_equipped_item = equip_actions[i]
# equip and use actions are the most important
if use_actions[i] == 1:
actions[i] = equip_actions_dict[currently_equipped_item]
# Moving camera is second most important (horizontal first)
elif camera_actions[i][0] < -camera_margin:
actions[i] = 3
elif camera_actions[i][0] > camera_margin:
actions[i] = 4
elif camera_actions[i][1] > camera_margin:
actions[i] = 5
elif camera_actions[i][1] < -camera_margin:
actions[i] = 6
elif forward_actions[i] == 1:
if jump_actions[i] == 1:
actions[i] = 2
elif attack_actions[i] == 1:
actions[i] = 11
else:
actions[i] = 1
elif attack_actions[i] == 1:
actions[i] = 0
elif left_actions[i] == 1:
actions[i] = 8
elif right_actions[i] ==1:
actions[i] = 9
elif back_actions[i] == 1:
actions[i] = 7
elif jump_actions[i] == 1:
actions[i] = 10
else:
# No reasonable mapping (would be no-op)
actions[i] = 39
return actions
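# --- design note (hypothetical alternative, standalone) ---------------------
# A hand-written table this large easily drifts out of sync with _actions.
# Building it from one ordered item list keeps the ids contiguous by
# construction; _vh_items below lists only the first few items as a sketch:
_vh_items = ['acacia_door', 'acacia_fence', 'cactus']
_vh_table = {name: i + 12 for i, name in enumerate(_vh_items)}
assert _vh_table == {'acacia_door': 12, 'acacia_fence': 13, 'cactus': 14}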
# custom action wrapper for Simple GAIL agent for MineRL
#*******************************************************************
# NAVIGATION SUBTASK
#*******************************************************************
# custom action wrapper for complete GAIL agent for MineRL
class ActionShaping_Navigation(gym.ActionWrapper):
def __init__(self, env, camera_angle=10, always_attack=False):
super().__init__(env)
self.camera_angle = camera_angle
self.always_attack = always_attack
self._actions = [
[('attack', 1)], #0
[('forward', 1)], #1
[('forward', 1), ('jump', 1)], #2
[('camera', [0, self.camera_angle])], #3 #horizontal (right)
[('camera', [0, -self.camera_angle])], #4 #horizontal (left)
[('camera', [-self.camera_angle, 0])], #5 #verticle
[('camera', [self.camera_angle, 0])], #6 #verticle
[('back', 1)], #7
[('left', 1)], #8
[('right', 1)], #9
[('jump', 1)], #10
#[('equip',11), ('use', 1)],
[('forward', 1), ('attack', 1)], #11
]
self.actions = []
for actions in self._actions:
act = self.env.action_space.noop()
for a, v in actions:
act[a] = v
if self.always_attack:
act['attack'] = 1
self.actions.append(act)
# add no-op action
act = self.env.action_space.noop()
self.actions.append(act)
self.action_space = gym.spaces.Discrete(len(self.actions))
def action(self, action):
        return self.actions[action]
def processed_actions_to_wrapper_actions_Navigation(dataset_actions, camera_margin=5):
"""
    Turn a batch of actions from the dataset (`batch_iter`) into a numpy
    array of discrete action ids matching ActionShaping_Navigation._actions.
    `camera_margin` sets the threshold for what counts as "moving camera".
    Note: hardcoded to the ordering of ActionShaping_Navigation._actions;
    if you change that list, remember to change this function too.
    Rows without a discrete match are assigned the sentinel 99 so they can
    be filtered out later; true no-ops map to the trailing no-op id.
"""
# There are dummy dimensions of shape one
camera_actions = dataset_actions[:,10:].astype(np.float32)
attack_actions = dataset_actions[:,0].astype(np.float32)
forward_actions = dataset_actions[:,3].astype(np.float32)
jump_actions = dataset_actions[:,4].astype(np.float32)
back_actions = dataset_actions[:,1].astype(np.float32)
left_actions = dataset_actions[:,5].astype(np.float32)
right_actions = dataset_actions[:,6].astype(np.float32)
equip_actions = dataset_actions[:,2]
use_actions = dataset_actions[:,9].astype(np.float32)
sneak_actions = dataset_actions[:,7].astype(np.float32)
sprint_actions = dataset_actions[:,8].astype(np.float32)
batch_size = len(camera_actions)
actions = np.zeros((batch_size,), dtype=int)
    for i in range(len(camera_actions)):
        # Moving camera is most important (horizontal first!)
        if camera_actions[i][1] < -camera_margin:
            actions[i] = 3
        elif camera_actions[i][1] > camera_margin:
            actions[i] = 4
        elif camera_actions[i][0] > camera_margin:
            actions[i] = 5
        elif camera_actions[i][0] < -camera_margin:
            actions[i] = 6
        elif forward_actions[i] == 1:
            if jump_actions[i] == 1:
                actions[i] = 2
            elif attack_actions[i] == 1:
                actions[i] = 11
            else:
                actions[i] = 1
        elif attack_actions[i] == 1:
            actions[i] = 0
        elif left_actions[i] == 1:
            actions[i] = 8
        elif right_actions[i] == 1:
            actions[i] = 9
        elif jump_actions[i] == 1:
            actions[i] = 10
        elif back_actions[i] == 1:
            actions[i] = 7
        elif sum(dataset_actions[i, (0, 1, 3, 4, 5, 6, 7, 8, 9)].astype(np.float32)) == 0:
            # actual no-op: none of the button actions were pressed
            actions[i] = 12
        else:
            # catch everything else (e.g. sneak/sprint/use only) and remove later
            actions[i] = 99
    return actions
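Since unmatched frames are labelled 99, they can be dropped before training on the mapped labels; a minimal sketch with toy data (the `labels`, `frames`, and `keep` names are illustrative):

```python
import numpy as np

# Toy labels as produced by the mapping above: 99 marks frames with no discrete match.
labels = np.array([1, 99, 2, 0, 99, 12])
frames = np.arange(len(labels))  # stand-in for the matching observations
keep = labels != 99              # boolean mask of usable frames
labels, frames = labels[keep], frames[keep]
# labels is now [1, 2, 0, 12], with the matching frames kept in lockstep
```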
# return only the image as the observation
class PovOnlyObservation(gym.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        self.observation_space = self.env.observation_space['pov']

    def observation(self, observation):
        obs = observation['pov'].squeeze().astype(np.float32)
        # Transpose observations to be channel-first (CHW instead of HWC)
        obs = obs.transpose(2, 0, 1)
        # Normalize observations to [0, 1]
        obs /= 255.0
        return obs
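The transform in `observation` can be checked in isolation on a dummy frame (the 64x64 size is illustrative; MineRL `pov` frames follow the same HWC uint8 layout):

```python
import numpy as np

# Dummy HWC uint8 frame, like observation['pov'] after squeeze().
pov = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
obs = pov.astype(np.float32).transpose(2, 0, 1) / 255.0  # channel-first, scaled to [0, 1]
assert obs.shape == (3, 64, 64)
```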